
5 Common Objections to SEO (& How to Respond) – Whiteboard Friday

Posted by KameronJenkins

How many of these have you heard over the years? Convincing clients and stakeholders that SEO is worth it is half the battle. From doubts about the value of its traffic to concerns over time and competition with other channels, it seems like there’s an argument against our jobs at every turn. 

In today’s Whiteboard Friday, Kameron Jenkins covers the five most common objections to SEO and how to counter them with smart, researched, fact-based responses.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey, everybody. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins, and today we’re going to be going through five common objections to SEO and how to respond. Now I know, if you’re watching this and you’re an SEO, you have faced some of these very objections before and probably a lot of others.

This is not an exhaustive list. I’m sure you’ve faced a ton of other objections, whether you’re talking to a potential client, maybe you’re talking to your friend or your family member. A lot of people have misunderstandings about SEO and that causes them to object to wanting to invest in it. So I thought I’d go through some of the ones that I hear the most and how I tend to respond in those situations. Hopefully, you’ll find that helpful.

1. “[Other channel] drives more traffic/conversions, so it’s better.”

Let’s dive in. The number one objection I hear a lot of the time is that this other channel, whether that be PPC, social, whatever, drives more traffic or conversions, therefore it’s better than SEO. I tend to respond in a few different ways, depending on the situation.

Success follows investment

So the number one thing I would usually say is: don’t forget that success follows investment.

So if you are investing a lot of time and money and talent into your PPC or social and you’re not really doing much with organic, you’re kind of just letting it go, usually that means, yeah, that other channel is going to be a lot more successful. So just keep that in mind. It’s not inherently successful or not. It kind of reflects the effort you’re putting into it.

Every channel serves a different purpose

Number two, I would say that every channel serves a different purpose. You’re not going to expect social media to drive conversions a lot of the time, because a lot of the time social is for engagement. It’s for more top of the funnel. It’s for more audience development. SEO, a lot of the time that lives at your top and mid-funnel efforts. It can convert, but not always.

So just keep that in mind. Every channel serves a different purpose. 

Assists vs last click only

The last thing I would say, kind of dovetailing off of that, is that assists versus last click only I know is a debate when it comes to attribution. But just keep in mind that when SEO and organic search doesn’t convert as the last click before conversion, it still usually assists in the process. So look at your assisted conversions and see how SEO is contributing.

2. “SEO is dead because the SERPs are full of ads.”



The number two objection I usually hear is SEO is dead because the SERPs are full of ads. To that, I would respond with a question. 

What SERPs are you looking at? 

It really depends on what you’re querying. If you’re only looking at those bottom-funnel, high-cost-per-click money keywords, absolutely those are monetized.

Those are going to be heavily monetized, because those are at the bottom of the funnel. So if you’re only ever looking at that, you might be pessimistic when it comes to your SEO. You might not be thinking that SEO has any kind of value, because organic search, those organic results are pushed down really low when you’re looking at those bottom funnel terms. So I think these two pieces of research are really interesting to look at in tandem when it comes to a response to this question.

I think this was put out sometime last year by Varn Research, and it said that 60% of people don’t even recognize ads in the search results when they see them. That’s probably even higher now that Google changed it from green to black, so it blends in a little better with the rest of the results. But then this data from Jumpshot says that only about 2% to 3% of all search clicks go to PPC.

So how can these things coexist? Well, they can coexist because the vast majority of searches don’t trigger ads. A lot more searches are informational and navigational than commercial.

People research before buying

So just keep in mind that people are doing a lot of research before buying.

A lot of times they’re looking to learn more information. They’re looking to compare. Keep in mind your buyer’s entire journey, their entire funnel and focus on that. Don’t just focus on the bottom of the funnel, because you will get discouraged when it comes to SEO if you’re only looking there. 

Better together

Also, they’re just better together. There are a lot of studies that show that PPC and SEO are more effective when they’re both shown on the search results together for a single company.

I’m thinking of one Seer did recently that showed the CTR is higher for both when they’re on the page together. So just keep that in mind.

3. “Organic drives traffic, just not the right kind.”

The number three objection I hear a lot is that organic drives traffic, just not the right kind of traffic. People usually mean a few different things when they say that. 

Branded vs non-branded

Number one, they could mean that organic drives traffic, but it’s usually just branded traffic anyway.

It’s just people who know about us already, and they’re searching our business name and they’re finding us. That could be true. But again, that’s probably because you’re not investing in SEO, not because SEO is not valuable. I would also say that a lot of times this is pretty easily debunked. A lot of times inadvertently people are ranking for non-branded terms that they didn’t even know they were ranking for.

So go into Google Search Console, look at their non-branded queries and see what’s driving impressions and clicks to the website. 

Assists are important too

Number two, again, just to say this one more time, assists are important too. They play a part in the eventual conversion or purchase. So even if organic drives traffic that doesn’t convert as the last click before conversion, it still usually plays a role.

It can be highly qualified

Number three, it can be highly qualified. Again, this is that following the investment thing. If you are actually paying attention to your audience, you know the ways they search, how they search, what terms they search for, what’s important to your brand, then you can bring in really highly qualified traffic that’s more inclined to convert if you’re paying attention and being strategic with your SEO.

4. “SEO takes too long”

Moving on to number four, that objection I hear is SEO takes too long. That’s honestly one of the most common objections you hear about SEO. 

SEO is not a growth hack

In response to that, I would say it’s not a growth hack. A lot of people who are really antsy about SEO and like “why isn’t it working right now” are really looking for those instant results.

They want a tactic they can sprinkle on their website for instant whatever they want. Usually it’s conversions and revenue and growth. I would say it’s not a growth hack. If you’re looking at it that way, it’s going to disappoint you. 

Methodology + time = growth

But I will say that SEO is more methodology than tactic. It’s something that should be ingrained and embedded into everything you do so that over time, when it’s baked into everything you’re doing, you’re going to achieve sustained growth.

So that’s how I respond to that one. 

5. “You can’t measure the ROI.”

Number five, the last one and probably one of the most frustrating (and I’m sure this is not exclusive to SEO; I know social hears it a lot): you can’t measure the ROI, therefore I don’t want to invest in it, because I don’t have proof that I’m getting a return on this investment. People tend to mean, I think, two things when they say this.

A) Predicting ROI

Number one, they really want to be able to predict ROI before they even dive in. They want assurances that “if I invest in this, I’m going to get X in return,” which I think has a lot of inherent problems, but there are some ways you can get close to gauging what you’re going to get for your efforts. So what I would do in this situation is use your own website’s data to build yourself a click-through rate curve so that you know the click-through rate at your various rank positions.

By knowing that and combining it with the search volume of a keyword or a phrase that you want to go after, you can multiply the two and say, “Hey, here’s the expected traffic we will get if you let me work on improving our rank position from 9 to 2 or 1,” or whatever that is. So there are ways to estimate and get close.

A lot of times, when you focus on improving one term, you’re likely going to get a lot more traffic than what you’re estimating, because you tend to end up ranking for so many more long-tail keywords that bring in a lot of additional search volume. So you’re probably going to underestimate when you do this. But that’s one way you can predict ROI.
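To make that estimate concrete, here’s a minimal sketch of the arithmetic, assuming you’ve already pulled a click-through rate curve from your own Search Console data. The CTR values, the 12,000 searches per month, and the function names are all illustrative placeholders, not figures from the video.

```typescript
// Hypothetical CTR-by-position curve built from your own Search Console data.
// These rates are placeholders; use your site's real averages.
const ctrByPosition: Record<number, number> = {
  1: 0.28, 2: 0.15, 3: 0.1, 4: 0.07, 5: 0.05,
  6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.018,
};

// Expected monthly organic clicks for a keyword at a given rank position.
function expectedClicks(monthlySearchVolume: number, position: number): number {
  const ctr = ctrByPosition[position] ?? 0;
  return monthlySearchVolume * ctr;
}

// "Here's the expected traffic if you let me improve our rank from 9 to 2."
const volume = 12_000; // example monthly search volume for the target keyword
const current = expectedClicks(volume, 9);
const target = expectedClicks(volume, 2);
console.log(`~${Math.round(current)} clicks/mo now, ~${Math.round(target)} at position 2`);
```

Remember that this estimate is a floor, not a ceiling, since the long-tail rankings mentioned above aren’t counted.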

B) Measuring ROI



Number two here, measuring ROI is a lot of times what people want to be doing.

They want to be able to prove that what they’re doing is beneficial in terms of revenue. So one way to do this is to get the lifetime value of the customer and multiply that by the close rate so that you have a goal value. Then you turn on your conversions and set up your goals in Google Analytics, which I think you should be doing anyway. This assumes that you’re not an e-commerce site.

There’s different tracking for that, but a similar type of methodology applies. If you apply these things, you can have a goal value. So that way, when people convert on your site, you start to rack up the actual dollar value, the estimated dollar value that whatever channel is producing. So you can go to your source/medium report and see Google organic and see how many conversions it’s producing and how much value.
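As a rough illustration of that goal-value math (not an official formula from the video, and with made-up numbers), the calculation looks something like this:

```typescript
// Illustrative goal-value calculation for a lead-gen site. All figures are
// placeholders, not benchmarks.
const customerLifetimeValue = 4_000; // average revenue from a closed customer
const leadCloseRate = 0.15;          // share of goal completions that become customers

// The goal value you would assign to a conversion in Google Analytics.
const goalValue = customerLifetimeValue * leadCloseRate; // 600 per conversion

// Estimated value a channel produced in a period, keeping last-click and
// assisted conversions separate so assists aren't double-counted as revenue.
function channelValue(lastClickConversions: number, assistedConversions: number) {
  return {
    lastClickValue: lastClickConversions * goalValue,
    assistedValue: assistedConversions * goalValue,
  };
}

console.log(channelValue(42, 18)); // e.g. google / organic for the month
```

Plugged into the source/medium report and the assisted conversions report described next, that goal value is what turns raw conversion counts into dollars.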

This same thing applies if you go to your assisted conversions report. You can see how much value is in there as well. I think that’s really beneficial just to be able to show people like, “Look, it is generating revenue. My SEO that’s getting you organic search traffic is generating value and real dollars and cents for you.” So those are some of the most common objections that I hear.

I want to know which ones you hear, too. So pop those in the comments. Let me know the objections you hear a lot of the time, and include either how you’re struggling to find the right response or something that you’ve found works as a response. Share that with us. We’d all love to know. Let’s make SEO better and something that people understand a lot better. So that’s it for this week’s Whiteboard Friday.

Come back again next week for another one.

Video transcription by Speechpad.com


The Real Impact of Mobile-First Indexing & The Importance of Fraggles

Posted by Suzzicks

While SEOs have been doubling down on content and quality signals for their websites, Google was building the foundation of a new reality for crawling, indexing, and ranking. Though many believe deep in their hearts that “Content is King,” the reality is that Mobile-First Indexing enables a new kind of search result. This search result focuses on surfacing and re-publishing content in ways that feed Google’s cross-device monetization opportunities better than simple websites ever could.

For two years, Google honed and changed their messaging about Mobile-First Indexing, mostly de-emphasizing the risk that good, well-optimized, Responsive-Design sites would face. Instead, the search engine giant focused more on the use of the Smartphone bot for indexing, which led to an emphasis on the importance of matching SEO-relevant site assets between desktop and mobile versions (or renderings) of a page. Things got a bit tricky when Google had to explain that the Mobile-First Indexing process would not necessarily be bad for desktop-oriented content, but all of Google’s shifting and positioning eventually validated my long-stated belief: That Mobile-First Indexing is not really about mobile phones, per se, but mobile content.

I would like to propose an alternative to the predominant view, a speculative theory about what has been going on with Google over the past two years. It is the thesis of my 2019 MozCon talk: something we are calling Fraggles and Fraggle-based Indexing.

I’ll go through Fraggles and Fraggle-based indexing, and how this new method of indexing has made web content more ‘liftable’ for Google. I’ll also outline how Fraggles impact the Search Results Pages (SERPs), and why the concept fits with Google’s promotion of Progressive Web Apps. Next, I will provide information about how astute SEOs can adapt their understanding of SEO and leverage Fraggles and Fraggle-Based Indexing to meet the needs of their clients and companies. Finally, I’ll go over the implications that this new method of indexing will have on Google’s monetization and technology strategy as a whole.

Ready? Let’s dive in.

Fraggles & Fraggle-based indexing

The SERP has changed in many ways. These changes can be thought of and discussed separately, but I believe that they are all part of a larger shift at Google. This shift includes “Entity-First Indexing” of crawled information around the existing structure of Google’s Knowledge Graph, and the concept of “Portable-prioritized Organization of Information,” which favors information that is easy to lift and re-present in Google’s properties — Google describes these two things together as “Mobile-First Indexing.”

As SEOs, we need to remember that the web is getting bigger and bigger, which means that it’s getting harder to crawl. Users now expect Google to index and surface content instantly. But while webmasters and SEOs were building out more and more content in flat, crawlable HTML pages, the best parts of the web were moving towards more dynamic websites and web-apps. These new assets were driven by databases of information on a server, populating their information into websites with JavaScript, XML or C++, rather than flat, easily crawlable HTML. 

For many years, this was a major problem for Google, and thus, it was a problem for SEOs and webmasters. Ultimately though, it was the more complex code that forced Google to shift to this more advanced, entity-based system of indexing — something we at MobileMoxie call Fraggles and Fraggle-Based Indexing, and the credit goes to JavaScript’s “Fragments.”

Fraggles represent individual parts (fragments) of a page for which Google has overlaid a “handle” or “jump-link” (aka named anchor, bookmark, etc.) so that a click on the result takes users directly to the part of the page where the relevant fragment of text is located. These Fraggles are then organized around the relevant nodes on the Knowledge Graph, so that the mapping of the relationships between different topics can be vetted, built out, and maintained over time, but also so that the structure can be used and reused, internationally — even if different content is ranking.

More than one Fraggle can rank for a page, and the format can vary from a text-link with a “Jump to” label, an unlabeled text link, a site-link carousel, a site-link carousel with pictures, or occasionally horizontal or vertical expansion boxes for the different items on a page.

The most notable thing about Fraggles is the automatic scrolling behavior from the SERP. While Fraggles are often linked to content that has HTML or JavaScript jump-links, sometimes the jump-links appear to be added by Google without being present in the code at all. This behavior is also prominently featured in AMP Featured Snippets, for which Google has the same scrolling behavior but also includes Google’s colored highlighting — superimposed on the page — to show the part of the page that was displayed in the Featured Snippet, which allows the searcher to see it in context. I write about this more in the article: What the Heck are Fraggles.

How Fraggles & Fraggle-based indexing works with JavaScript

Google’s desire to index Native Apps and Web Apps, including single-page apps, has necessitated Google’s switch to indexing based on Fragments and Fraggles, rather than pages. In JavaScript, as well as in Native Apps, a “Fragment” is a piece of content or information that is not necessarily a full page. 

The easiest way for an SEO to think about a Fragment is within the example of an AJAX expansion box: The piece of text or information that is fetched from the server to populate the AJAX expander when clicked could be described as a Fragment. Alternatively, if it is indexed for Mobile-First Indexing, it is a Fraggle. 
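To make the analogy concrete, here is a minimal sketch of that kind of expander in the browser: a click fetches a fragment of content from the server, injects it into the page, and exposes it behind a #hash jump-link so it can be deep-linked and scrolled to. The /fragments endpoint, element IDs, and data attributes are hypothetical.

```typescript
// Minimal AJAX-expander sketch: the expanded text is a "fragment" that never
// exists as its own page. Endpoint, IDs, and data attributes are hypothetical.
async function expandSection(sectionId: string): Promise<void> {
  const target = document.getElementById(sectionId);
  if (!target) return;

  // Fetch only the fragment of content for this section from the server.
  const response = await fetch(`/fragments/${sectionId}`);
  target.innerHTML = await response.text();

  // Expose the fragment behind a jump-link and scroll to it, roughly the way
  // a Fraggle result drops the searcher at the relevant part of the page.
  window.location.hash = sectionId;
  target.scrollIntoView({ behavior: "smooth" });
}

document.querySelectorAll<HTMLButtonElement>("[data-section]").forEach((button) =>
  button.addEventListener("click", () => expandSection(button.dataset.section!))
);
```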

It is no coincidence that Google announced the launch of Deferred JavaScript Rendering at roughly the same time as the public roll-out of Mobile-First Indexing without drawing-out the connection, but here it is: When Google can index fragments of information from web pages, web apps and native apps, all organized around the Knowledge Graph, the data itself becomes “portable” or “mobile-first.”

We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren’t getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 

Before Fraggles and Fraggle-Based Indexing, indexing # URLs would have just resulted in a massive duplicate content problem and extra indexing work for Google. Now that Fraggle-based Indexing is in place, it makes sense to index and report on # URLs in Search Console — especially for breaking up long, drawn-out JavaScript experiences like PWAs and Single-Page Apps that don’t have separate URLs or databases, and, in the long run, possibly even for indexing native apps without Deep Links.

Why index fragments & Fraggles?

If you’re used to thinking of rankings with the smallest increment being a URL, this idea can be hard to wrap your brain around. To help, consider this thought experiment: How useful would it be for Google to rank a page that gave detailed information about all different kinds of fruits and vegetables? It would be easy for a query like “fruits and vegetables,” that’s for sure. But if the query is changed to “lettuce” or “types of lettuce,” then the page would struggle to rank, even if it had the best, most authoritative information. 

This is because the “lettuce” keywords would be diluted by all the other fruit and vegetable content. It would be more useful for Google to rank the part of the page that is about lettuce for queries related to lettuce, and the part of the page about radishes for queries about radishes. But since users don’t want to scroll through the entire page of fruits and vegetables to find the information about the particular vegetable they searched for, Google prioritizes pages with keyword focus and density as they relate to the query. Google will rarely rank long pages that cover multiple topics, even if they are more authoritative.

With featured snippets, AMP featured snippets, and Fraggles, it’s clear that Google can already find the important parts of a page that answers a specific question — they’ve actually been able to do this for a while. So, if Google can organize and index content like that, what would the benefit be in maintaining an index that was based only on per-pages statistics and ranking? Why would Google want to rank entire pages when they could rank just the best parts of pages that are most related to the query?

To address these concerns, SEOs have historically worked to break individual topics out into separate pages, with one page focused on each topic or keyword cluster. So, with our vegetable example, this would ensure that the lettuce page could rank for lettuce queries and the radish page could rank for radish queries. With each website creating a new page for every possible topic that it would like to rank for, there’s a lot of redundant and repetitive work for webmasters. It also likely adds a lot of low-quality, unnecessary pages to the index. Realistically, how many individual pages on lettuce does the internet really need, and how would Google determine which one is the best? The fact is, Google wanted to shift to an algorithm that focused less on links and more on topical authority to surface only the best content — and the scrolling behavior of Fraggles lets Google sidestep the page-per-topic problem entirely.

Even though the effort to switch to Fraggle-based indexing, and to organize the information around the Knowledge Graph, was massive, the long-term benefits of the switch far outpace the costs to Google, because they make Google’s system more flexible, monetizable, and sustainable, especially as the amount of information and the number of connected devices expand exponentially. It also helps Google identify, serve, and monetize new cross-device search opportunities as they continue to expand, including search results on TVs, connected screens, and spoken results from connected speakers. A few relevant costs and benefits are outlined below for you to contemplate, keeping Google’s long-term perspective in mind:

Why Fraggles and Fraggle-based indexing are important for PWAs

What also makes the shift to Fraggle-based Indexing relevant to SEOs is how it fits in with Google’s championing of Progressive Web Apps or AMP Progressive Web Apps, (aka PWAs and PWA-AMP websites/web apps). These types of sites have become the core focus of Google’s Chrome Developer summits and other smaller Google conferences.

From the perspective of traditional crawling and indexing, Google’s focus on PWAs is confusing. PWAs often feature heavy JavaScript and are still frequently built as Single-Page Apps (SPA’s), with only one or only a few URLs. Both of these ideas would make PWAs especially difficult and resource-intensive for Google to index in a traditional way — so, why would Google be so enthusiastic about PWAs? 

The answer is that PWAs require ServiceWorkers, which let Fraggles and Fraggle-based indexing take the burden of crawling and indexing complex web content off Google.

In case you need a quick refresher: a ServiceWorker is a JavaScript file that instructs a device (mobile or computer) to create a local cache of content to be used just for the operation of the PWA. It is meant to make the loading of content much faster, because the content is stored locally rather than just left on a server or CDN somewhere on the internet, and it does so by saving copies of the text and images associated with certain screens in the PWA. Once a user accesses content in a PWA, the content doesn’t need to be fetched again from the server. It’s a bit like browser caching, but faster — the ServiceWorker stores the information about when content expires, rather than storing it on the web. This is what makes PWAs seem to work offline, but it is also why content that has not been visited yet is not stored in the ServiceWorker.
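For readers who haven’t looked inside one, here is a minimal cache-first ServiceWorker sketch of the behavior described above. The file names are placeholders, and this illustrates the standard PWA caching pattern, not anything Google-specific.

```typescript
// sw.ts — minimal cache-first ServiceWorker sketch. URLs are placeholders.
const CACHE_NAME = "pwa-cache-v1";
const PRECACHE = ["/", "/styles.css", "/app.js", "/offline.html"];

self.addEventListener("install", (event: any) => {
  // Save local copies of the core assets when the worker is installed.
  event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the local cache first; otherwise hit the network and cache the copy.
  event.respondWith(
    caches.match(event.request).then(
      (cached) =>
        cached ??
        fetch(event.request).then((response) => {
          const copy = response.clone();
          caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
          return response;
        })
    )
  );
});
```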

ServiceWorkers and SEO

Most SEOs who understand PWAs understand that a ServiceWorker is for caching and load time, but they may not understand that it is likely also for indexing. If you think about it, ServiceWorkers mostly store the text and images of a site, which is exactly what the crawler wants. A crawler that uses Deferred JavaScript Rendering could go through a PWA and simulate clicking on all the links and store static content using the framework set forth in the ServiceWorker. And it could do this without always having to crawl all the JavaScript on the site, as long as it understood how the site was organized, and that organization stayed consistent. 

Google would also know exactly how often to re-crawl, and therefore could only crawl certain items when they were set to expire in the ServiceWorker cache. This saves Google a lot of time and effort, allowing them to get through or possibly skip complex code and JavaScript.

For a PWA to be indexed, Google requires webmasters to ‘register their app in Firebase,’ though they used to require webmasters to “register their ServiceWorker.” Firebase is the Google platform that allows webmasters to set up and manage indexing and deep linking for their native apps, chat-bots and, now, PWAs.

Direct communication with a PWA specialist at Google a few years ago revealed that Google didn’t crawl the ServiceWorker itself, but crawled the API to the ServiceWorker. It’s likely that when webmasters register their ServiceWorker with Google, Google is actually creating an API to the ServiceWorker, so that the content can be quickly and easily indexed and cached on Google’s servers. Since Google has already launched an Indexing API and appears to now favor API’s over traditional crawling, we believe Google will begin pushing the use of ServiceWorkers to improve page speed, since they can be used on non-PWA sites, but this will actually be to help ease the burden on Google to crawl and index the content manually.

Flat HTML may still be the fastest way to get web information crawled and indexed with Google. For now, JavaScript still has to be deferred for rendering, but it is important to recognize that this could change and crawling and indexing is not the only way to get your information to Google. Google’s Indexing API, which was launched for indexing time-sensitive information like job postings and live-streaming video, will likely be expanded to include different types of content. 

It’s important to remember that this is how AMP, Schema, and many other types of powerful SEO functionalities have started with a limited launch; beyond that, some great SEO’s have already tested submitting other types of content in the API and seen success. Submitting to APIs skips Google’s process of blindly crawling the web for new content and allows webmasters to feed the information to them directly.

It is possible that the new Indexing API follows a similar structure or process to PWA indexing. Submitted URLs can already get some kinds of content indexed or removed from Google’s index, usually in about an hour, and while the API is currently only officially available for those two kinds of content, we expect it to be expanded broadly.
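For reference, a call to the existing Indexing API is a simple authenticated POST to the urlNotifications:publish endpoint. The sketch below assumes you have already obtained an OAuth access token for the https://www.googleapis.com/auth/indexing scope; token handling and error checking are omitted.

```typescript
// Rough sketch of a call to Google's Indexing API (urlNotifications:publish).
// Assumes an OAuth token with the indexing scope is already in hand.
async function notifyGoogle(url: string, accessToken: string): Promise<void> {
  const response = await fetch(
    "https://indexing.googleapis.com/v3/urlNotifications:publish",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      // URL_UPDATED announces new or changed content; URL_DELETED removes it.
      body: JSON.stringify({ url, type: "URL_UPDATED" }),
    }
  );
  console.log("Indexing API response status:", response.status);
}
```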

How will this impact SEO strategy?

Of course, every SEO wants to know how to leverage this speculative theory — how can we make the changes in Google to our benefit? 

The first thing to do is take a good, long, honest look at a mobile search result. Position #1 in the organic rankings is just not what it used to be. There’s a ton of engaging content that is often pushing it down, but not counting as an organic ranking position in Search Console. This means that you may be maintaining all your organic rankings while also losing a massive amount of traffic to SERP features like Knowledge Graph results, Featured Snippets, Google My Business, maps, apps, Found on the Web, and other similar items that rank outside of the normal organic results. 

These results, as well as Pay-per-Click results (PPC), are more impactful on mobile because they are stacked above organic rankings. Rather than being off to the side, as they might be in a desktop view of the search, they push organic rankings further down the results page. There has been some great reporting recently about the statistical, large-scale impact of changes to the SERP and how these changes have altered user behavior in search, especially from Dr. Pete Meyers, Rand Fishkin, and Jumpshot.

Dr. Pete has focused on the increasing number of changes to the Google Algorithm recorded in his MozCast, which heated up at the end of 2016 when Google started working on Mobile-First Indexing, and again after it launched the Medic update in 2018. 

Rand, on the other hand, focused on how the new types of rankings are pushing traditional organic results down, resulting in less traffic to websites, especially on mobile. All this great data from these two really set the stage for a fundamental shift in SEO strategy as it relates to Mobile-First Indexing.

The research shows that Google re-organized its index to suit a different presentation of information — especially if they are able to index that information around an entity-concept in the Knowledge Graph. Fraggle-based Indexing makes all of the information that Google crawls even more portable because it is intelligently nested among related Knowledge Graph nodes, which can be surfaced in a variety of different ways. Since Fraggle-based Indexing focuses more on the meaningful organization of data than it does on pages and URLs, the results are a more “windowed” presentation of the information in the SERP. SEOs need to understand that search results are now based on entities and use-cases (think micro-moments), instead of pages and domains.

Google’s Knowledge Graph

To really grasp how this new method of indexing will impact your SEO strategy, you first have to understand how Google’s Knowledge Graph works. 

Since it is an actual “graph,” all Knowledge Graph entries (nodes) include both vertical and lateral relationships. For instance, an entry for “bread” can include lateral relationships to related topics like cheese, butter, and cake, but may also include vertical relationships like “standard ingredients in bread” or “types of bread.” 

Lateral relationships can be thought of as related nodes on the Knowledge Graph, and hint at “Related Topics” whereas vertical relationships point to a broadening or narrowing of the topic; which hints at the most likely filters within a topic. In the case of bread, a vertical relationship-up would be topics like “baking,” and down would include topics like “flour” and other ingredients used to make bread, or “sourdough” and other specific types of bread.
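One way for an SEO to internalize this structure is to model it as data. The sketch below is our own illustration of lateral and vertical relationships using the bread example, not Google’s actual Knowledge Graph schema.

```typescript
// Illustrative entity node with lateral and vertical relationships.
// This is our own sketch of the concept, not Google's data model.
interface EntityNode {
  name: string;
  lateral: string[];  // related topics at the same level ("Related Topics")
  broader: string[];  // vertical relationship up: wider topics
  narrower: string[]; // vertical relationship down: likely filters within the topic
}

const bread: EntityNode = {
  name: "bread",
  lateral: ["cheese", "butter", "cake"],
  broader: ["baking"],
  narrower: ["flour", "yeast", "sourdough", "types of bread"],
};

// A lateral hop suggests "Related Topics"; a narrower hop suggests SERP filters.
console.log(bread.lateral, bread.narrower);
```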

SEOs should note that Knowledge Graph entries can now include an increasingly wide variety of filters and tabs that narrow the topic information to serve different types of searcher intent. This includes things like helping searchers find videos, books, images, quotes, and locations, but in the case of filters, it can be topic-specific and unpredictable (informed by active machine learning). This is the crux of Google’s goal with Fraggle-based Indexing: to be able to organize the information of the web based on Knowledge Graph entries or nodes, otherwise discussed in SEO circles as “entities.”

Since the relationships of one entity to another remain the same, regardless of the language a person is speaking or searching in, the Knowledge Graph information is language-agnostic, and thus easily used for aggregation and machine learning in all languages at the same time. Using the Knowledge Graph as a cornerstone for indexing is, therefore, a much more useful and efficient means for Google to access and serve information in multiple languages for consumption and ranking around the world. In the long-term, it’s far superior to the previous method of indexing.

Examples of Fraggle-based indexing in the SERPs 

Knowledge Graph

Google has dramatically increased the number of Knowledge Graph entries and the categories and relationships within them. The build-out is especially prominent for topics for which Google has a high amount of structured data and information already. This includes topics like:

  • TV and Movies — from Google Play
  • Food and Recipe — from Recipe Schema, recipe AMP pages, and external food and nutrition databases 
  • Science and medicine — from trusted sources (like WebMD) 
  • Businesses — from Google My Business. 

Google is adding more and more nodes and relationships to their graph, and existing entries are also being built out with more tabs and carousels to break a single topic into smaller, more granular topics or types of information.

As you can see below, the build-out of the Knowledge Graph has also added to the number of filters and drill-down options within many queries, even outside of the Knowledge Graph. This increase can be seen throughout all of the Google properties, including Google My Business and Shopping, both of which we believe are now sections of the Knowledge Graph:


Google Search for ‘Blazers’ with Visual Filters at the Top for Shopping Oriented Queries

Google My Business (Business Knowledge Graph) with Filters for Information about Googleplex

Other similar examples include the additional filters and “Related Topics” results in Google Images, which we also believe to represent nodes on the Knowledge Graph:

Google Images Increase in Filters & Inclusion of Related Topics Means that These Are Also Nodes on the Knowledge Graph

The Knowledge Graph is also being presented in a variety of different ways. Sometimes there’s a sticky navigation that persists at the top of the SERP, as seen in many media-oriented queries, and sometimes it’s broken up to show different information throughout the SERP, as you may have noticed in many of the local business-oriented search results, both shown below.


Media Knowledge Graph with Sticky Top Nav (Query for ‘Ferris Bueller’s Day Off’)

Local Business Knowledge Graph (GMB) With Information Split-up Throughout the SERP

Since the launch of Fraggle-based indexing is essentially a major Knowledge Graph build-out, Knowledge Graph results have also begun including more engaging content which makes it even less likely that users will click through to a website. Assets like playable video and audio, live sports scores, and location-specific information such as transportation information and TV time-tables can all be accessed directly in the search results. There’s more to the story, though. 

Increasingly, Google is also building out their own proprietary content by re-mixing existing information that they have indexed to create unique, engaging content like animated ‘AMP Stories’ which webmasters are also encouraged to build-out on their own. They have also started building a zoo of AR animals that can show as part of a Knowledge Graph result, all while encouraging developers to use their AR kit to build their own AR assets that will, no doubt, eventually be selectively incorporated into the Knowledge Graph too.


Google AR Animals in Knowledge Graph

Google AMP Stories Now Called ‘Life in Images’

SEO Strategy for Knowledge Graphs

Companies that want to leverage the Knowledge Graph should take every opportunity to create their own assets, like AR models and AMP Stories, so that Google has no reason to do it for them. Beyond that, companies should submit accurate information directly to Google whenever they can. The easiest way to do this is through Google My Business (GMB). Whatever types of information are requested in GMB should be added or uploaded. If Google Posts are available in your business category, you should be doing Posts regularly, and making sure that they link back to your site with a call to action. If you have videos or photos that are relevant for your company, upload them to GMB. Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.

You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry (“Best Plumbers in Denver,” “Best Shoe Deals on the Web,” or “Top 15 Best Reality TV Shows”). Links from these posts also help but are not required — especially if you can get your company name on enough lists with the other top players. Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.

While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.

Featured snippets 

Featured Snippets or “Answers” first surfaced in 2014 and have also expanded quite a bit, as shown in the graph below. It is useful to think of Featured Snippets as rogue facts, ideas or concepts that don’t have a full Knowledge Graph result, though they might actually be associated with certain existing nodes on the Knowledge Graph (or they could be in the vetting process for eventual Knowledge Graph build-out). 

Featured Snippets seem to surface when the information comes from a source that Google does not have an incredibly high level of trust for, as it does for Wikipedia, and often they come from third-party sites that may or may not have a monetary interest in the topic — something that makes Google want to vet the information more thoroughly and may prevent Google from using it if a less biased option is available.

Like the Knowledge Graph, Featured Snippets results have grown very rapidly in the past year or so, and have also begun to include carousels — something that Rob Bucci writes about extensively here. We believe that these carousels represent potentially related topics that Google knows about from the Knowledge Graph. Featured Snippets now look even more like mini-Knowledge Graph entries: Carousels appear to include both lateral and vertically related topics, and their appearance and maintenance seem to be driven by click volume and subsequent searches. However, this may also be influenced by aggregated engagement data for People Also Ask and Related Search data.

The build-out of Featured Snippets has been so aggressive that sometimes the answers that Google lifts are obviously wrong, as you can see in the example image below. It is also important to understand that Featured Snippet results can change from location to location and are not language-agnostic, and thus, are not translated to match the Search Language or the Phone Language settings. Google also does not hold themselves to any standard of consistency, so one Featured Snippet for one query might present an answer one way, and a similar query for the same fact could present a Featured Snippet with slightly different information. For instance, a query for “how long to boil an egg” could result in an answer that says “5 minutes” and a different query for “how to make a hard-boiled egg” could result in an answer that says “boil for 1 minute, and leave the egg in the water until it is back to room temperature.”


Featured Snippet with Carousel

Featured Snippet that is Wrong

The data below was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.



SEO strategy for featured snippets

All of the standard recommendations for driving Featured Snippets apply here. This includes making sure that you keep the information that you are trying to get ranked in a Featured Snippet clear, direct, and within the recommended character count. It also includes using simple tables, ordered lists, and bullets to make the data easier to consume, as well as modeling your content after existing Featured Snippet results in your industry.

This is still speculative, but it seems likely that the inclusion of Speakable Schema markup for things like “How To,” “FAQ,” and “Q&A” may also drive Featured Snippets. These kinds of results are specially designated as content that works well in a voice-search. Since Google has been adamant that there is not more than one index, and Google is heavily focused on improving voice-results from Google Assistant devices, anything that could be a good result in the Google Assistant, and ranks well, might also have a stronger chance at ranking in a Featured Snippet.
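If you want to experiment with the markup mentioned above, the skeleton below shows FAQ-style structured data with a speakable property, serialized from a TypeScript object into JSON-LD. The question, answer, and CSS selectors are placeholders; note that attaching speakable to FAQ content is speculative on our part, since Google documents it primarily for news and article content.

```typescript
// Skeletal FAQPage structured data with a "speakable" property, ready to be
// serialized into a <script type="application/ld+json"> tag. The question,
// answer, and CSS selectors are placeholders; adapt them to your own page.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "How long does SEO take to work?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "SEO is a methodology rather than a growth hack; results typically build over months.",
      },
    },
  ],
  // Speculative: speakable is documented mainly for news/article content.
  speakable: {
    "@type": "SpeakableSpecification",
    cssSelector: [".faq-question", ".faq-answer"],
  },
};

console.log(JSON.stringify(faqJsonLd, null, 2));
```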

People Also Ask & Related Searches

Finally, the increased occurrence of “Related Searches” as well as the inclusion of People Also Ask (PAA) questions, just below most Knowledge Graph and Featured Snippet results, is undeniable. The Earl Tea screenshot shows that PAA’s along with Interesting Finds are both part of the Knowledge Graph too.

The graph below shows the steady increase in PAA’s. PAA results appear to be an expansion of Featured Snippets because once expanded, the answer to the question is displayed, with the citation below it. Similarly, some Related Search results also now include a result that looks like a Featured Snippet, instead of simply linking over to a different search result. You can now find ‘Related Searches’ throughout the SERP, often as part of a Knowledge Graph results, but sometimes also in a carousel in the middle of the SERP, and always at the bottom of the SERP — sometimes with images and expansion buttons to surface Featured Snippets within the Related Search results directly in the existing SERP.

Boxes with Related Searches are now also included with Image Search results. It’s interesting to note that Related Search results in Google Images started surfacing at the same time that Google began translating image Title Tags and Alt Tags. That timing fits well with the idea behind Entity-First Indexing, namely that Entities and the Knowledge Graph are language-agnostic, and with the notion that Related Searches are somehow related to the Knowledge Graph.


This data was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.


People Also Ask

Related Searches

SEO strategy for PAA and related searches

Since PAAs and some Related Searches now appear to simply include Featured Snippets, driving Featured Snippet results for your site is a strong strategy here too. It often appears that PAA results include at least two versions of the same question, re-stated in different language, before including questions that are more related to lateral and vertical nodes on the Knowledge Graph. If you include information on your site that Google thinks is related to the topic, based on Related Searches and PAA questions, it could help make your site appear relevant and authoritative.

Finally, it is crucial to remember that you don’t need a website to rank in Google now, and SEOs should consider non-website rankings as part of their job too.

If a business doesn’t have a website, or if you just want to cover all the bases, you can let Google host your content directly — in as many places as possible. We have seen that Google-hosted content generally seems to get preferential treatment in Google search results and Google Discover, especially when compared to the decreasing traffic from traditional organic results. Google is now heavily focused on surfacing multimedia content, so anything that you might have previously created a new page on your website for should now be considered for a video.

Google My Business (GMB) is great for companies that don’t have websites, or that want to host their websites directly with Google. YouTube is great for videos, TV, video-podcasts, clips, animations, and tutorials. If you have an app, a book, an audio-book, a podcast, a movie, TV show, class or music, or PWA, you can submit that directly to GooglePlay (much of the video content in GooglePlay is now cross-populated in YouTube and YouTube TV, but this is not necessarily true of the other assets). This strategy could also include books in Google Books, flights in Google Flights, Hotels in Google Hotel listings, and attractions in Google Explore. It also includes having valid AMP code, since Google hosts AMP content, and includes Google News if your site is an approved provider of news.

Changes to SEO tracking for Fraggle-based indexing

The biggest problem for SEOs is the missing organic traffic, but also that current methods of tracking organic results generally don’t show whether things like Knowledge Graph results, Featured Snippets, PAAs, Found on the Web, or other types of results are appearing at the top of the query or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to be lifting and re-presenting everyone’s content, the traffic will never arrive at the site, and SEOs won’t know if their efforts are still returning the same monetary value. This problem is especially poignant for publishers, who have only been able to sell advertising on their websites based on the expected traffic that the website could drive.

The other thing to remember is that results differ — especially on mobile, where they vary from device to device (generally based on screen size) and can also vary based on the phone’s OS. They can change significantly based on the location or the language settings of the phone, and they definitely do not always match desktop results for the same query. Most SEOs don’t know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First.

As well, SEO tools generally only report on rankings from one location — the location of their servers — rather than being able to test from different locations. 

The only thing that good SEOs can do to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like on top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector with Data Studio to help with SEO reporting, but APIs are also available for deeper integrations in content-editing tools, dashboards, and other SEO tools.

Conclusion

At MozCon 2017, I expressed my belief that the impact of Mobile-First Indexing requires a re-interpretation of the words “Mobile,” “First,” and “Indexing.” Re-defined in the context of Mobile-First Indexing, the words should be understood to mean “portable,” “preferred,” and “organization of information.” The potential of a shift to Fraggle-based indexing and the recent changes to the SERPs, especially in the past year, certainly seems to prove the accuracy of this theory. And though they have been in the works for more than two years, the changes to the SERP now seem to be rolling-out faster and are making the SERP unrecognizable from what it was only three or four years ago.

In this post, we described Fraggles and Fraggle-based indexing for SEO as a theory that speculates about the true nature of the change to Mobile-First Indexing: how the index itself, and the units of indexing, may have changed to accommodate faster and more nuanced organization of information based on the Knowledge Graph, rather than simply links and URLs. We covered how Fraggles and Fraggle-based Indexing work, how they relate to JavaScript and PWAs, what strategies SEOs can take to leverage them for additional exposure in the search results, and how SEOs can update their success tracking to account for all the variables that impact mobile search results.

SEOs need to consider the opportunities and change the way we view our overall indexing strategy, and our jobs as a whole. If Google is organizing the index around the Knowledge Graph, that makes it much easier for Google to constantly mention near-by nodes of the Knowledge Graph in “Related Searches” carousels, links from the Knowledge Graph, and topics in PAAs. It might also make it easier to believe that featured snippets are simply pieces of information being vetted (via Google’s click-crowdsourcing) for inclusion or reference in the Knowledge Graph.

Fraggles and Fraggled indexing re-frames the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first — i.e. the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages, and more on pieces of pages, similar to the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an “answer engine” instead of a “search engine,” then this fits well with their stated identity, and their desire to build a more efficient, sustainable, international engine.

SEOs also need to find ways to serve their users better, by focusing more on the reality of the mobile SERP, and how much it can vary for real users. While Google may not call the smallest rankable units Fraggles, it is what we call them, and we think they are critical to the future of SEO.


The New Moz Local Is Here! Can’t-Miss Highlights & How to Get Started

Posted by MiriamEllis

Last month we announced that the new Moz Local would be arriving soon. We’re so excited — it’s here! If you’re a current Moz Local customer, you may have already been exploring the new and improved platform this week! If not, signing up now will get you access to all the new goodies we have in store for you.

With any major change to a tool you use, it can take a bit for you to adjust. That’s why I wanted to write up a quick look at some of the highlights of the product, and from there encourage you to dig into our additional resources.

What are some key features to dig into?

Full location data management

More than 90% of purchases happen in physical stores. The first objective of local SEO is ensuring that people searching online for what you offer:

  1. Encounter your business
  2. Access accurate information they can trust about it
  3. See the signals they’re looking for to choose you for a transaction

Moz Local meets this reality with active and continuous synching of location data so that you can grow your authority, visibility, and the public trust by managing your standard business information across partnered data aggregators, apps, sites, and databases. This is software centered around real-time location data and profile management, providing updates as quickly as partners can support them. And, with your authorized connection to Google and Facebook, updates you make to your business data on these two powerhouse platforms are immediate. Moz Local helps you master the online consumer encounter.

And, because business data changes over time, ongoing management of your online assets is essential. 80% of customers lose trust in a brand when its local business listings mislead them with incorrect information like wrong names, phone numbers, or hours of operation. No brand can afford to lose this trust! Moz Local’s data cleansing service delivers ongoing accuracy and proper formatting for successful submission to the platforms that matter most.

Finally, Moz Local supports the distribution of rich data beyond the basics. Give customers compelling reasons to choose your business over others by uploading photos, videos, descriptions, social links, and more. Full control over these elements can greatly enhance customer encounters and improve conversions.

Automated duplicate deletion

Duplicate listings of a business location can turn profile management into a tangle, mislead consumers, dilute ranking strength, and sometimes even violate platform guidelines. But historically, detection and resolution of duplicates has been cumbersome and all but impossible to scale when handled manually.

One of the most exciting improvements you’ll experience with the new Moz Local is that duplicate workflows are now automated! Our next-level algorithmic technology will identify, confirm and permanently delete your duplicate listings in a fully automated fashion that requires no interaction or involvement on your part. This is a major development that will save local brands and agencies an amazing amount of time.

Deep Google and Facebook reporting & management

Logging in and out of multiple dashboards can be such a hassle, but with Moz Local, you’ll have insights about all of your locations and clients in a single space. Moz Local is now hooked up with Facebook management (hooray!) and we’ve deepened our Google My Business integration.

We’ll capture Facebook insights data for impressions and clicks on your location’s published Facebook content. And you’ll find it convenient that we surface impressions data for both Google Maps and Search. This means you’ll have easy access to click data for the familiar attributes: clicks-for-directions, clicks-to-website, and clicks-to-call, plus tracking of direct, indirect, and branded queries. Whether you’re dealing with just one listing or 100,000 of them, all the data will be at your fingertips.

One new feature I’m especially keen to share is the alerts you’ll receive every time a new photo is uploaded to your Google listing by a third party. Image spam is real, and awareness of public uploads of imagery that violates guidelines is part and parcel of reputation management.

Local dashboard

Our goal is to make your local SEO work as simple as possible, and very often, the at-a-glance summary in the new Moz Local dashboard will tell you all you need to know for routine check-ups. The default view of all the locations you manage can, of course, be easily filtered and segmented to look at specific clients or locations. Almost effortlessly, you’ll get a very quick overview of data like:

    • Average Profile Completeness
    • Locations requiring attention
    • Total listings in sync (sync is the new term for what we previously referred to as “published”)
    • Listings being updated
    • Listings requiring sync
    • Duplicate Reporting
    • Facebook Insights data
    • Google My Business Insights data

Profile suggestion engine

Who has time for guesswork when you’re trying to make the most of your online assets? Our powerful new profile suggestion engine tells you exactly what data you need to provide to reach maximum profile completeness.

Quickly drill down to a specific location. From there, Moz Local surfaces multiple fields (like long description, photos, opening hours, fax numbers, etc.) along with suggestions based on other verifiable online sources to improve consistency across the data publisher and partner network. Again, this is a big time-saver, especially if your agency has multiple clients or your enterprise has multiple locations to manage.

Email alerts, notifications, activity feed

Choose how you’d like to stay up-to-date on the status of your listings.

  • Every Moz Local dashboard contains an activity feed that continuously streams the latest information, updates, and alerts for all of your listings
  • Opt-in for email alerts if that’s your preferred method of notification. Digest emails are configurable to be sent on a weekly, monthly, or quarterly basis
  • Optional upgrade for email alerts for new reviews. If you upgrade, you’ll receive these notifications daily, ensuring you aren’t missing complaints, praise, and conversion opportunities

Review management

Google has revealed that about one-third of people looking for local business information are actually trying to find local business reviews. From the viewpoint of consumers, your online reviews are your brand’s reputation. Our own large-scale marketing survey found that 90% of respondents agree that reviews impact local rankings, but that 60% of participants lack a comprehensive review management strategy. The result is that platforms like Google have become mediums of unheard customer voices, neglected leads, and reputation damage.

The good news is that Moz Local customers have the option to upgrade their subscriptions to turn this unsustainable scenario completely around. Be alerted to incoming reviews on multiple platforms and respond to them quickly. See right away if a problem is emerging at one of your locations, necessitating in-store intervention, or if you’ve been hit with a review spam attack. And go far beyond this with insight into other types of customer sentiment, like photo uploads and Google Q&A.

The truth is that in 2019 and for the foreseeable future, no business in a competitive market can afford to neglect public sentiment management, because it has become central to customer service. Every brand is in the business of customer service, but awareness, responsiveness, accountability, and action require strategy and the right tools. Let Moz Local help you take control of your priceless reputation.

Social posting

Manage the interactive aspects of your local business profiles with this optional upgrade. Share news, special offers, and questions & answers with customers on social platforms and in directories. This includes:

  • Engaging with customers on social media to share updates: news posts can be shared on Facebook and eligible directories, offers can be posted in eligible directories, and Questions & Answers can be posted to your Google Business Profile.
  • Publishing Posts instantly or scheduling them for a future date. And here’s something you’ll be excited to hear: you can submit the same post for multiple locations at once, create and save templates for posts, and edit/delete posts from the publishing dashboard!

In competitive local markets, transitioning from passive observation of online assets to interactive engagement with the public can set your brand apart.

What should my next steps in the new Moz Local be?

  1. Ensure that your location data and your profile are complete and accurate within the new Moz Local. Be sure to add in as much data as you can in the Basic Data, Rich Data, and Photos & Videos sections to reach high profile completeness. Doing so will ensure that your locations’ listings throughout the local search ecosystem are as informative as possible for potential customers. Moz Local acts as a “source of truth” for your location data and overwrites data on third party platforms like Google and Facebook, so be sure the data you’ve provided us is accurate before moving on to step two.
  2. Gain immediate insights into your local search presence by connecting your Google My Business and Facebook profiles. Once connected, these will begin to pull in tons of data, from impressions, to clicks, to queries.
  3. Once your profile is complete and Google My Business and Facebook profiles are connected, it’s time to sync your data to ensure that what you’ve provided to Moz Local is shared out to our network. Simply click the Sync button in the top right to push your information to our partners.

Where can I find more information?

I’m glad you’ve asked! Our resource center will be a great place to start. There, a user guide and video tutorial can show you the ropes, and you can also get registered for our upcoming webinar on June 25th at 10:00am PST:

Save my spot

The Help Hub has also been given a complete refresh with the new Moz Local. There you will find ample resources, FAQs, and descriptions of each area of the tool to dig into.

For any questions that you can’t find answers to, you can always reach out to our wonderful Help Team.

Download your free copy of the Moz Local User’s Manual

In the spirit of making things as delightfully simple as possible for every customer, we’ve published a guide to smooth sailing in unfamiliar waters. In this short and sweet Moz Local User Manual, you’ll find:

  • A full visual key to the dashboard and all its functions
  • Instructions for adding locations
  • Instructions for editing locations
  • Instructions for cancelling locations
  • Instructions for managing reviews and social engagement
  • And other actionable info!

Who is the Moz Local User Manual for?

Anyone on your team who touches your Moz Local account should have a copy of this guide. It will level everyone up and cut down on team leads having to answer the same questions over and over again about basic tasks. Where appropriate, agencies may also want to send a copy over to clients who need clarity about why you’re using Moz Local as an integral part of local search marketing campaigns. We’ve written the manual in non-technical language, with step-by-step instructions any reader can follow.

Download the free guide

What’s next from Moz?

Expect a number of exciting new updates to continue rolling out — both in the new Moz Local tool as well as in other areas of our platform. As I mentioned before, it’s our serious plan to devote everything we’ve got to putting the power of local SEO into your hands. Keep an eye out for more to come from Moz to support your local search marketing.


SEO & Progressive Web Apps: Looking to the Future

Posted by tombennet

Practitioners of SEO have always been mistrustful of JavaScript.

This is partly based on experience; the ability of search engines to discover, crawl, and accurately index content which is heavily reliant on JavaScript has historically been poor. But it’s also habitual, born of a general wariness towards JavaScript in all its forms that isn’t based on understanding or experience. This manifests itself as dependence on traditional SEO techniques that have not been relevant for years, and a conviction that being good at technical SEO does not require an understanding of modern web development.

As Mike King wrote in his post The Technical SEO Renaissance, these attitudes are contributing to “an ever-growing technical knowledge gap within SEO as a marketing field, making it difficult for many SEOs to solve our new problems”. They also put SEO practitioners at risk of being left behind, since too many of us refuse to explore – let alone embrace – technologies such as Progressive Web Apps (PWAs), modern JavaScript frameworks, and other such advancements which are increasingly being seen as the future of the web.

In this article, I’ll be taking a fresh look at PWAs. As well as exploring implications for both SEO and usability, I’ll be showcasing some modern frameworks and build tools which you may not have heard of, and suggesting ways in which we need to adapt if we’re to put ourselves at the technological forefront of the web.

1. Recap: PWAs, SPAs, and service workers

Progressive Web Apps are essentially websites which provide a user experience akin to that of a native app. Features like push notifications enable easy re-engagement with your audience, while users can add their favorite sites to their home screen without the complication of app stores. PWAs can continue to function offline or on low-quality networks, and they allow a top-level, full-screen experience on mobile devices which is closer to that offered by native iOS and Android apps.

Best of all, PWAs do this while retaining – and even enhancing – the fundamentally open and accessible nature of the web. As suggested by the name they are progressive and responsive, designed to function for every user regardless of their choice of browser or device. They can also be kept up-to-date automatically and — as we shall see — are discoverable and linkable like traditional websites. Finally, it’s not all or nothing: existing websites can deploy a limited subset of these technologies (using a simple service worker) and start reaping the benefits immediately.

The spec is still fairly young, and naturally, there are areas which need work, but that doesn’t stop them from being one of the biggest advancements in the capabilities of the web in a decade. Adoption of PWAs is growing rapidly, and organizations are discovering the myriad of real-world business goals they can impact.

You can read more about the features and requirements of PWAs over on Google Developers, but two of the key technologies which make PWAs possible are:

  • App Shell Architecture: Commonly achieved using a JavaScript framework like React or Angular, this refers to a way of building single page apps (SPAs) which separates logic from the actual content. Think of the app shell as the minimal HTML, CSS, and JS your app needs to function; a skeleton of your UI which can be cached.
  • Service Workers: A special script that your browser runs in the background, separate from your page. It essentially acts as a proxy, intercepting and handling network requests from your page programmatically.

Note that these technologies are not mutually exclusive; the single page app model (brought to maturity with AngularJS in 2010) obviously predates service workers and PWAs by some time. As we shall see, it’s also entirely possible to create a PWA which isn’t built as a single page app. For the purposes of this article, however, we’re going to be focusing on the ‘typical’ approach to developing modern PWAs, exploring the SEO implications — and opportunities — faced by teams that choose to join the rapidly-growing number of organizations that make use of the two technologies described above.

We’ll start with the app shell architecture and the rendering implications of the single page app model.

2. The app shell architecture

URLs

In a nutshell, the app shell architecture involves aggressively caching static assets (the bare minimum of UI and functionality) and then loading the actual content dynamically, using JavaScript. Most modern JavaScript SPA frameworks encourage something resembling this approach, and the separation of logic and content in this way benefits both speed and usability. Interactions feel instantaneous, much like those on a native app, and data usage can be highly economical.

Credit to https://developers.google.com/web/fundamentals/architecture/app-shell

As I alluded to in the introduction, a heavy reliance on client-side JavaScript is a problem for SEO. Historically, many of these issues centered around the fact that while search crawlers require unique URLs to discover and index content, single page apps don’t need to change the URL for each state of the application or website (hence the phrase ‘single page’). The reliance on fragment identifiers — which aren’t sent as part of an HTTP request — to dynamically manipulate content without reloading the page was a major headache for SEO. Legacy solutions involved replacing the hash with a so-called hashbang (#!) and the _escaped_fragment_ parameter, a hack which has long-since been deprecated and which we won’t be exploring today.

Thanks to the HTML5 history API and pushState method, we now have a better solution. The browser’s URL bar can be changed using JavaScript without reloading the page, thereby keeping it in sync with the state of your application or site and allowing the user to make effective use of the browser’s ‘back’ button. While this solution isn’t a magic bullet — your server must be configured to respond to requests for these deep URLs by loading the app in its correct initial state — it does provide us with the tools to solve the problem of URLs in SPAs.

// Run this in your console to modify the URL in your
// browser - note that the page doesn't actually reload.
history.pushState(null, "Page 2", "/page2.html");

The bigger problem facing SEO today is actually much easier to understand: rendering content, namely when and how it gets done.

Rendering content

Note that when I refer to rendering here, I’m referring to the process of constructing the HTML. We’re focusing on how the actual content gets to the browser, not the process of drawing pixels to the screen.

In the early days of the web, things were simpler on this front. The server would typically return all the HTML that was necessary to render a page. Nowadays, however, many sites which utilize a single page app framework deliver only minimal HTML from the server and delegate the heavy lifting to the client (be that a user or a bot). Given the scale of the web, this requires a lot of time and computational resources, and as Google made clear at its I/O conference in 2018, this poses a major problem for search engines:

“The rendering of JavaScript-powered websites in Google Search is deferred until Googlebot has resources available to process that content.”

On larger sites, this second wave of indexation can sometimes be delayed for several days. On top of this, you are likely to encounter a myriad of problems with crucial information like canonical tags and metadata being missed completely. I would highly recommend watching the video of Google’s excellent talk on this subject for a rundown of some of the challenges faced by modern search crawlers.

Google is one of the very few search engines that renders JavaScript at all. What’s more, it does so using a web rendering service that until very recently was based on Chrome 41 (released in 2015). Obviously, this has implications outside of just single page apps, and the wider subject of JavaScript SEO is a fascinating area right now. Rachel Costello’s recent white paper on JavaScript SEO is the best resource I’ve read on the subject, and it includes contributions from other experts like Bartosz Góralewicz, Alexis Sanders, Addy Osmani, and a great many more.

For the purposes of this article, the key takeaway here is that in 2019 you cannot rely on search engines to accurately crawl and render your JavaScript-dependent web app. If your content is rendered client-side, it will be resource-intensive for Google to crawl, and your site will underperform in search. No matter what you’ve heard to the contrary, if organic search is a valuable channel for your website, you need to make provisions for server-side rendering.

But server-side rendering is a concept which is frequently misunderstood…

“Implement server-side rendering”

This is a common SEO audit recommendation which I often hear thrown around as if it were a self-contained, easily-actioned solution. At best it’s an oversimplification of an enormous technical undertaking, and at worst it’s a misunderstanding of what’s possible/necessary/beneficial for the website in question. Server-side rendering is an outcome of many possible setups and can be achieved in many different ways; ultimately, though, we’re concerned with getting our server to return static HTML.

So, what are our options? Let’s break down the concept of server-side rendered content a little and explore our options. These are the high-level approaches which Google outlined at the aforementioned I/O conference:

  • Dynamic Rendering — Here, normal browsers get the ‘standard’ web app which requires client-side rendering while bots (such as Googlebot and social media services) are served with static snapshots. This involves adding an additional step onto your server infrastructure, namely a service which fetches your web app, renders the content, then returns that static HTML to bots based on their user agent (i.e. UA sniffing). Historically this was done with a service like PhantomJS (now deprecated and no longer developed), while today Puppeteer (headless Chrome) can perform a similar function. The main advantage is that it can often be bolted into your existing infrastructure. (A rough sketch of this approach follows this list.)
  • Hybrid Rendering — This is Google’s long-term recommendation, and it’s absolutely the way to go for newer site builds. In short, everyone — bots and humans — get the initial view served as fully-rendered static HTML. Crawlers can continue to request URLs in this way and will get static content each time, while on normal browsers, JavaScript takes over after the initial page load. This is a great solution in theory, and comes with many other advantages for speed and usability too; more on that soon.
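
To make the dynamic rendering option more concrete, here’s a minimal sketch of what that extra server-side step might look like. It assumes an Express app and a hypothetical renderWithHeadlessChrome() helper backed by something like Puppeteer, and the user agent list is illustrative only, so treat it as an outline rather than a definitive implementation:

// A simplified sketch of dynamic rendering, not production code.
// Assumes an Express app and a hypothetical renderWithHeadlessChrome()
// helper backed by a headless browser such as Puppeteer.
const express = require('express');
const app = express();

// Illustrative, non-exhaustive list of crawler user agents.
const BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

app.use(async (req, res, next) => {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    // Bots receive a prerendered, static HTML snapshot of the requested URL.
    const html = await renderWithHeadlessChrome(req.originalUrl);
    return res.send(html);
  }
  // Normal browsers receive the standard client-side rendered app.
  next();
});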

The latter is cleaner, doesn’t involve UA sniffing, and is Google’s long-term recommendation. It’s also worth clarifying that ‘hybrid rendering’ is not a single solution — it’s an outcome of many possible approaches to making static prerendered content available server-side. Let’s break down a couple of ways such an outcome can be achieved.

Isomorphic/universal apps

This is one way in which you might achieve a ‘hybrid rendering’ setup. Isomorphic applications use JavaScript which runs on both the server and the client. This is made possible thanks to the advent of Node.js, which – among many other things – allows developers to write code which can run on the backend as well as in the browser.

Typically you’ll configure your framework (React, Angular Universal, whatever) to run on a Node server, prerendering some or all of the HTML before it’s sent to the client. Your server must, therefore, be configured to respond to deep URLs by rendering HTML for the appropriate page. In normal browsers, this is the point at which the client-side application will seamlessly take over. The server-rendered static HTML for the initial view is ‘rehydrated’ (brilliant term) by the browser, turning it back into a single page app and executing subsequent navigation events with JavaScript.
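
As a very rough sketch of the idea, assuming a React <App /> component and a prebuilt client bundle already exist, a Node/Express server might prerender the initial view like this:

// A simplified sketch of server-side rendering with React on Node.
// Assumes an existing <App /> component and a prebuilt client bundle
// (/bundle.js) that rehydrates the markup in the browser.
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App');

const app = express();

app.get('*', (req, res) => {
  // Render the initial view for this deep URL to static HTML...
  const markup = renderToString(React.createElement(App, { url: req.url }));

  // ...then ship it with the client bundle, which takes over in the browser.
  res.send(`<!DOCTYPE html>
<html>
  <head><title>My App</title></head>
  <body>
    <div id="root">${markup}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);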

Done well, this setup can be fantastic since it offers the usability benefits of client-side rendering, the SEO advantages of server-side rendering, and a rapid first paint (even if Time to Interactive is often negatively impacted by the rehydration as JS kicks in). For fear of oversimplifying the task, I won’t go into too much more detail here, but the key point is that while isomorphic JavaScript / true server-side rendering can be a powerful solution, it is often enormously complex to set up.

So, what other options are there? If you can’t justify the time or expense of a full isomorphic setup, or if it’s simply overkill for what you’re trying to achieve, are there any other ways you can reap the benefits of the single page app model — and hybrid rendering setup — without sabotaging your SEO?

Prerendering/JAMstack

Having rendered content available server-side doesn’t necessarily mean that the rendering process itself needs to happen on the server. All we need is for rendered HTML to be there, ready to serve to the client; the rendering process itself can happen anywhere you like. With a JAMstack approach, rendering of your content into HTML happens as part of your build process.

I’ve written about the JAMstack approach before. By way of a quick primer, the term stands for JavaScript, APIs, and markup, and it describes a way of building complex websites without server-side software. The process of assembling a site from front-end component parts — a task a traditional site might achieve with WordPress and PHP — is executed as part of the build process, while interactivity is handled client-side using JavaScript and APIs.

Think of it this way: everything lives in your Git repository. Your content is stored as plain text markdown files (editable via a headless CMS or other API-based solution) and your page templates and assembly logic are written in Go, JavaScript, Ruby, or whatever language your preferred site generator happens to use. Your site can be built into static HTML on any computer with the appropriate set of command line tools before it’s hosted anywhere. The resulting set of easily-cached static files can often be securely hosted on a CDN for next to nothing.

I honestly think static site generators – or rather the principles and technologies which underpin them — are the future. There’s every chance I’m wrong about this, but the power and flexibility of the approach should be clear to anyone who’s used modern npm-based automation software like Gulp or Webpack to author their CSS or JavaScript. I’d challenge anyone to test the deep Git integration offered by specialist webhost Netlify in a real-world project and still think that the JAMstack approach is a fad.


The popularity of static site generators on GitHub, generated using https://stars.przemeknowak.com

The significance of a JAMstack setup to our discussion of single page apps and prerendering should be fairly obvious. If our static site generator can assemble HTML based on templates written in Liquid or Handlebars, why can’t it do the same with JavaScript?

There is a new breed of static site generator which does just this. Frequently powered by React or Vue.js, these programs allow developers to build websites using cutting-edge JavaScript frameworks and can easily be configured to output SEO-friendly, static HTML for each page (or ‘route’). Each of these HTML files is fully rendered content, ready for consumption by humans and bots, and serves as an entry point into a complete client-side application (i.e. a single page app). This is a perfect execution of what Google termed “hybrid rendering”, though the precise nature of the pre-rendering process sets it quite apart from an isomorphic setup.

A great example is GatsbyJS, which is built in React and GraphQL. I won’t go into too much detail, but I would encourage everyone who’s read this far to check out their homepage and excellent documentation. It’s a well-supported tool with a reasonable learning curve, an active community (a feature-packed v2.0 was released in September), an extensible plugin-based architecture, rich integrations with many CMSs, and it allows developers to utilize modern frameworks like React without sabotaging their SEO. There’s also Gridsome, based on VueJS, and React Static which — you guessed it — uses React.
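
To give a flavor of what this looks like in practice, here’s a minimal sketch of a Gatsby page component based on the v2-era API. The query and markup are illustrative rather than lifted from Gatsby’s docs; the key point is that the GraphQL query runs at build time and the result is baked into static HTML:

// pages/docs.js — a minimal, illustrative Gatsby page component.
// The exported GraphQL query runs at build time, and the page is emitted
// as fully rendered static HTML that later rehydrates into a SPA.
import React from 'react';
import { graphql } from 'gatsby';

export default function DocsPage({ data }) {
  return (
    <article>
      <h1>{data.site.siteMetadata.title}</h1>
      <p>This content is prerendered at build time.</p>
    </article>
  );
}

export const query = graphql`
  query {
    site {
      siteMetadata {
        title
      }
    }
  }
`;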


Nike’s recent Just Do It campaign, which utilized the React-powered static site generator GatsbyJS and is hosted on Netlify.

Enterprise-level adoption of these platforms looks set to grow; GatsbyJS was used by Nike for their Just Do It campaign, Airbnb for their engineering site airbnb.io, and Braun have even used it to power a major e-commerce site. Finally, our friends at SEOmonitor used it to power their new website.

But that’s enough about single page apps and JavaScript rendering for now. It’s time we explored the second of our two key technologies underpinning PWAs. Promise you’ll stay with me to the end (haha, nerd joke), because it’s time to explore Service Workers.

3. Service Workers

First of all, I should clarify that the two technologies we’re exploring — SPAs and service workers — are not mutually exclusive. Together they underpin what we commonly refer to as a Progressive Web App, yes, but it’s also possible to have a PWA which isn’t an SPA. You could also integrate a service worker into a traditional static website (i.e. one without any client-side rendered content), which is something I believe we’ll see happening a lot more in the near future. Finally, service workers operate in tandem with other technologies like the Web App Manifest, something that my colleague Maria recently explored in more detail in her excellent guide to PWAs and SEO.

Ultimately, though, it is service workers which make the most exciting features of PWAs possible. They’re one of the most significant changes to the web platform in its history, and everyone whose job involves building, maintaining, or auditing a website needs to be aware of this powerful new set of technologies. If, like me, you’ve been eagerly checking Jake Archibald’s Is Service Worker Ready page for the last couple of years and watching as adoption by browser vendors has grown, you’ll know that the time to start building with service workers is now.

We’re going to explore what they are, what they can do, how to implement them, and what the implications are for SEO.

What can service workers do?

A service worker is a special kind of JavaScript file which runs outside of the main browser thread. It sits in-between the browser and the network, and its powers include:

  • Intercepting network requests and deciding what to do with them programmatically. The worker might go to network as normal, or it might rely solely on the cache. It could even fabricate an entirely new response from a variety of sources. That includes constructing HTML.
  • Preloading files during service worker installation. For SPAs this commonly includes the ‘app shell’ we discussed earlier, while simple static websites might opt to preload all HTML, CSS, and JavaScript, ensuring basic functionality is maintained while offline.
  • Handling push notifications, similar to a native app. This means websites can get permission from users to deliver notifications, then rely on the service worker to receive messages and execute them even when the browser is closed.
  • Executing background sync, deferring network operations until connectivity has improved. This might be an ‘outbox’ for a webmail service or a photo upload facility. No more “request failed, please try again later” – the service worker will handle it for you at an appropriate time.

The benefits of these kinds of features go beyond the obvious usability perks. As well as driving adoption of HTTPS across the web (all the major browsers will only register service workers on the secure protocol), service workers are transformative when it comes to speed and performance. They underpin new approaches and ideas like Google’s PRPL Pattern, since we can maximize caching efficiency and minimize reliance on the network. In this way, service workers will play a key role in making the web fast and accessible for the next billion web users.

So yeah, they’re an absolute powerhouse.

Implementing a service worker

Rather than doing a bad job of writing a basic tutorial here, I’m instead going to link to some key resources. After all, you are in the best position to know how deep your understanding of service workers needs to be.

The MDN Docs are a good place to learn more about service workers and their capabilities. If you’re already confident with the essentials of web development and enjoy a learn-by-doing approach, I’d highly recommend completing Google’s PWA training course. It includes a whole practical exercise on service workers, which is a great way to familiarize yourself with the basics. If ES6 and promises aren’t yet a part of your JavaScript repertoire, prepare for a baptism of fire.

The key thing to understand — and which you’ll realize very quickly once you start experimenting — is that service workers hand over an incredible level of control to developers. Unlike previous attempts to solve the connectivity conundrum (such as the ill-fated AppCache), service workers don’t enforce any specific patterns on your work; they’re a set of tools for you to write your own solutions to the problems you’re facing.

One consequence of this is that they can be very complex. Registering and installing a service worker is not a simple exercise, and any attempts to cobble one together by copy-pasting from StackExchange are doomed to failure (seriously, don’t do this). There’s no such thing as a ready-made service worker for your site — if you’re to author a suitable worker, you need to understand the infrastructure, architecture, and usage patterns of your website. Uncle Ben, ever the web development guru, said it best: with great power comes great responsibility.
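
For orientation, the registration step itself is only a few lines (a minimal sketch below). The real complexity lives in the worker script you point it at and the caching strategy you design for your particular site:

// Registering a service worker from your page's JavaScript.
// All of the interesting logic lives in /sw.js, not in this step.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('Service worker registered with scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  });
}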

One last thing: you’ll probably be surprised how many sites you visit are already using a service worker. Head to chrome://serviceworker-internals/ in Chrome or about:debugging#workers in Firefox to see a list.

Service workers and SEO

In terms of SEO implications, the most relevant thing about service workers is probably their ability to hijack requests and modify or fabricate responses using the Fetch API. What you see in ‘View Source’ and even on the Network tab is not necessarily a representation of what was returned from the server. It might be a cached response or something constructed by the service worker from a variety of different sources.

Credit: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

Here’s a practical example:

  • Head to the GatsbyJS homepage
  • Hit the link to the ‘Docs’ page.
  • Right-click – View Source

No content, right? Just some inline scripts and styles and empty HTML elements — a classic client-side JavaScript app built in React. Even if you open the Network tab and refresh the page, the Preview and Response tabs will tell the same story. The actual content only appears in the Element inspector, because the DOM is being assembled with JavaScript.

Now run a curl request for the same URL (https://www.gatsbyjs.org/docs/), or fetch the page using Screaming Frog. All the content is there, along with proper title tags, canonicals, and everything else you might expect from a page rendered server-side. This is what a crawler like Googlebot will see too.

This is because the website uses hybrid rendering and a service worker — installed in your browser — is handling subsequent navigation events. There is no need for it to fetch the raw HTML for the Docs page from the server because the client-side application is already up-and-running – thus, View Source shows you what the service worker returned to the application, not what the network returned. Additionally, these pages can be reloaded while you’re offline thanks to the service worker’s effective use of the cache.

You can easily spot which responses came from the service worker using the Network tab — note the ‘from ServiceWorker’ line below.

On the Application tab, you can see the service worker which is running on the current page along with the various caches it has created. You can disable or bypass the worker and test any of the more advanced functionality it might be using. Learning how to use these tools is an extremely valuable exercise; I won’t go into details here, but I’d recommend studying Google’s Web Fundamentals tutorial on debugging service workers.

I’ve made a conscious effort to keep code snippets to a bare minimum in this article, but grant me this one. I’ve put together an example which illustrates how a simple service worker might use the Fetch API to handle requests and the degree of control which we’re afforded:
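
A minimal sketch of that pattern is below. The cache name and the /offline.html fallback page are illustrative assumptions, not a production recipe:

// sw.js — a hugely simplified sketch: try the cache first, fall back to the
// network, and finally fall back to a custom offline page.
const CACHE_NAME = 'static-v1';

self.addEventListener('install', event => {
  // Preload the offline fallback page during installation.
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(['/offline.html']))
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cached => cached || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});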


I hope that this (hugely simplified and non-production ready) example illustrates a key point, namely that we have extremely granular control over how resource requests are handled. In the example above we’ve opted for a simple try-cache-first, fall-back-to-network, fall-back-to-custom-page pattern, but the possibilities are endless. Developers are free to dictate how requests should be handled based on hostnames, directories, file types, request methods, cache freshness, and loads more. Responses – including entire pages – can be fabricated by the service worker. Jake Archibald explores some common methods and approaches in his Offline Cookbook.

The time to learn about the capabilities of service workers is now. The skillset required for modern technical SEO has a fair degree of overlap with that of a web developer, and today, a deep understanding of the dev tools in all major browsers – including service worker debugging – should be regarded as a prerequisite.

4. Wrapping Up

SEOs need to adapt

Until recently, it’s been too easy to get away with not understanding the consequences and opportunities posed by PWAs and service workers.

These were cutting-edge features which sat on the periphery of what was relevant to search marketing, and the aforementioned wariness of many SEOs towards JavaScript did nothing to encourage experimentation. But PWAs are rapidly on their way to becoming a norm, and it will soon be impossible to do an effective job without understanding the mechanics of how they function. To stay relevant as a technical SEO (or SEO Engineer, to borrow another term from Mike King), you should put yourself at the forefront of these kinds of paradigm-shifting developments. The technical SEO who is illiterate in web development is already an anachronism, and I believe that further divergence between the technical and content-driven aspects of search marketing is no bad thing. Specialize!

Upon learning that a development team is adopting a new JavaScript framework for a new site build, it’s not uncommon for SEOs to react with a degree of cynicism. I’m certainly guilty of joking about developers being attracted to the latest shiny technology or framework, and at how rapidly the world of JavaScript development seems to evolve, layer upon layer of abstraction and automation being added to what — from the outside — can often seem to be a leaning tower of a development stack. But it’s worth taking the time to understand why frameworks are chosen, when technologies are likely to start being used in production, and how these decisions will impact SEO.

Instead of criticizing 404 handling or internal linking of a single page app framework, for example, it would be far better to be able to offer meaningful recommendations which are grounded in an understanding of how they actually work. As Jono Alderson observed in his talk on the Democratization of SEO, contributions to open source projects are more valuable in spreading appreciation and awareness of SEO than repeatedly fixing the same problems on an ad-hoc basis.

Beyond SEO

One last thing I’d like to mention: PWAs are such a transformative set of technologies that they obviously have consequences which reach far beyond just SEO. Other areas of digital marketing are directly impacted too, and from my standpoint, one of the most interesting is analytics.

If your website is partially or fully functional while offline, have you adapted your analytics setup to account for this? If push notification subscriptions are a KPI for your website, are you tracking this as a goal? Remember that service workers do not have access to the Window object, so tracking these events is not possible with ‘normal’ tracking code. Instead, it’s necessary to configure your service worker to build hits using the Measurement Protocol, queue them if necessary, and send them directly to the Google Analytics servers.
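
As a rough sketch of what that can look like (the tracking ID and client ID below are placeholders, and real-world code would add queueing and retry logic), a service worker can build and send a Measurement Protocol hit like this:

// Inside a service worker: a simplified sketch of sending a hit to
// Google Analytics via the Measurement Protocol.
function sendAnalyticsHit(eventAction) {
  const payload = new URLSearchParams({
    v: '1',              // Measurement Protocol version
    tid: 'UA-XXXXX-Y',   // placeholder tracking ID
    cid: '555',          // client ID (generate and store a real one)
    t: 'event',          // hit type
    ec: 'Service Worker',// event category
    ea: eventAction      // event action, e.g. 'push-received'
  });
  return fetch('https://www.google-analytics.com/collect', {
    method: 'POST',
    body: payload.toString()
  });
}

self.addEventListener('push', event => {
  event.waitUntil(sendAnalyticsHit('push-received'));
});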

This is a fascinating area that I’ve been exploring a lot lately, and you can read the first post in my series of articles on PWA analytics over on the Builtvisible blog.

That’s all from me for now! Thanks for reading. If you have any questions or comments, please leave a message below or drop me a line on Twitter @tomcbennet.

Many thanks to Oliver Mason and Will Nye for their feedback on an early draft of this article.


The One-Hour Guide to SEO: Keyword Targeting & On-Page Optimization – Whiteboard Friday

Posted by randfish

We’ve covered strategy, keyword research, and how to satisfy searcher intent — now it’s time to tackle optimizing the webpage itself! In the fourth part of the One-Hour Guide to SEO, Rand offers up an on-page SEO checklist to start you off on your way towards perfectly optimized and keyword-targeted pages.

If you missed them, check out the other episodes in the series so far:

A picture of the whiteboard. The content is all detailed within the transcript below.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans. Welcome to another edition of our special One-Hour Guide to SEO. We are now on Part IV – Keyword Targeting and On-Page Optimization. So hopefully, you’ve watched Part III, where we talked about searcher satisfaction, how to make sure searchers are happy with the page content that you create and the user experience that you build for them, as well as Part II, where we talked about keyword research and how to make sure that you are targeting the right words and phrases that searchers are actually looking for, that you think you can actually rank for, and that actually get real organic click-through rate, because Google’s zero-click searches are rising.

A depiction of a site with important on-page SEO elements highlighted, drawn on the whiteboard.

Now we’re into on-page SEO. So this is essentially taking the words and phrases that we know we want to rank for with the content that we know will help searchers accomplish their task. Now how do we make sure that the page is optimal for ranking in Google?

On-page SEO has evolved

Well, this is very different from the way it was years ago. A long time ago, and unfortunately many people still believe this to be true about SEO, it was: How do I stuff my keywords into all the right tags and places on the page? How do I take advantage of things like the meta keywords tag, which hasn’t been used in a decade, maybe two? How do I take advantage of putting all the words and phrases stuffed into my title, my URL, my description, my headline, my H2 through H6 tags, all these kinds of things?

Most of that does not matter, but some of it still does. Some of it is still important, and we need to run through what those are so that you give yourself the best possible chance for ranking.

The on-page SEO checklist

So what I’ve done here is created a sort of brief, on-page SEO checklist. This is not comprehensive, especially on the technical portion, because we’re saving that for Part V, the technical SEO section, which we will get into, of this Guide. In this checklist, some of the most important things are on here. 

☑ Descriptive, compelling, keyword-rich title element

Many of the most important things are on here, and those include things like a descriptive, compelling, keyword-rich but not stuffed title element, also called the page title or a title tag. So, for example, if I am a tool website, like toolsource.com — I made that domain name up, I assume it’s registered to somebody — and I want to rank for the “best online survey tools,” well, “The Best Online Survey Tools for 2019” is a great title tag, and it’s very different from best online survey tools, best online survey software, best online survey software 2019. You’ve seen title tags like that. You’ve seen pages that contain stuff like that. That is no longer good SEO practice.

So we want that descriptive, compelling, makes me want to click. Remember that this title is also going to show up in the search results as the title of the snippet that your website appears in.

☑ Meta description designed to draw the click

Second, a meta description. This is still used by search engines, not for rankings though. Sort of think of it like ad text. You are drawing a click, or you’re attempting to draw the click. So what you want to do is have a description that tells people what’s on the page and inspires them, incites them, makes them want to click on your result instead of somebody else’s. That’s your chance to say, “Here’s why we’re valuable and useful.”

☑ Easy-to-read, sensible, short URL

An easy-to-read, sensible, short URL. For example, toolsource.com/reviews/best-online-surveys-2019. Perfect, very legible, very readable. I see that in the results, I think, “Okay, I know what that page is going to be.” I see that copied and pasted somewhere on the web, I think, “I know what’s going to be at that URL. That looks relevant to me.”

Or reviews.best-online-tools.info. Okay, well, first off, that’s a freaking terrible domain name. /oldseqs?ide=17 bunch of weird letters and tab detail equals this, and UTM parameter equals that. I don’t know what this is. I don’t know what all this means. By the way, having more than one or two URL parameters is very poorly correlated with and not recommended for trying to rank in search results. So you want to try and rewrite these to be more friendly, shorter, more sensible, and readable by a human being. That will help Google as well.

☑ First paragraph optimized for appearing in featured snippets

That first paragraph, the first paragraph of the content or the first few words of the page should be optimized for appearing in what Google calls featured snippets. Now, featured snippets: when I perform a search, for many queries, I don’t just see a list of pages. Sometimes I’ll see this box, often with an image and a bunch of descriptive text that’s drawn from the page, often from the first paragraph or two. So if you want to get that featured snippet, you have to be able to rank on page one, and you need to be optimized to answer the query right in your first paragraph. But this is an opportunity for you to be ranking in position three or four or five, but still have the featured snippet answer above all the other results. Awesome when you can do this in SEO, very, very powerful thing. Featured snippet optimization, there’s a bunch of resources on Moz’s website that we can point you to there too.

☑ Use the keyword target intelligently in…

☑ The headline

So if I’m trying to rank for “best online survey tools,” I would try and use that in my headline. Generally speaking, I like to have the headline and the title of the piece nearly the same or exactly the same so that when someone clicks on that title, they get the same headline on the page and they don’t get this cognitive dissonance between the two.

☑ The first paragraph

The first paragraph, we talked about. 

☑ The page content

The page’s content, you don’t want to have a page that’s talking about best online survey tools and you never mention online surveys. That would be a little weird. 

☑ Internal link anchors

An internal link anchor. So if other places on your website talk about online survey tools, you should be linking to this page. This is helpful for Google finding it, helpful for visitors finding it, and helpful to say this is the page that is about this on our website.

A whiteboard drawing depicting how to target one page with multiple keywords vs multiple pages targeting single keywords.

I do strongly recommend taking the following advice, which is we are no longer in a world where it makes sense to target one keyword per page. For example, best online survey tools, best online survey software, and best online survey tools 2019 are technically three unique keyword phrases. They have different search volumes. Slightly different results will show up for each of them. But it is no longer the case, whereas it was maybe a decade ago, that I would go create a page for each one of those separate things.

Instead, because these all share the same searcher intent, I want to go with one page, just a single URL that targets all the keywords that share the exact same searcher intent. If searchers are looking to find exactly the same thing but with slightly modified or slight variations in how they phrase things, you should have a page that serves all of those keywords with that same searcher intent rather than multiple pages that try to break those up, for a bunch of reasons. One, it’s really hard to get links to all those different pages. Getting links just period is very challenging, and you need them to rank.

Second off, the difference between those is going to be very, very subtle, and it will be awkward and seem to Google very awkward that you have these slight variations with almost the same thing. It might even look to them like duplicate or very similar or low-quality content, which can get you down-ranked. So stick to one page per set of shared intent keywords.

☑ Leverage appropriate rich snippet options

Next, you want to leverage appropriate rich snippet options. So, for example, if you are in the recipes space, you can use a schema markup for recipes to show Google that you’ve got a picture of the recipe and a cooking time and all these different details. Google offers this in a wide variety of places. When you’re doing reviews, they offer you the star ratings. Schema.org has a full list of these, and Google’s rich snippets markup page offers a bunch more. So we’ll point you to both of those as well.
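
For example, a recipe page might describe itself with JSON-LD structured data. The snippet below is a hypothetical, stripped-down sketch injected with JavaScript; the same object can also be placed directly in a script tag with the type "application/ld+json":

// A hypothetical, stripped-down Recipe schema, injected as JSON-LD.
const recipeSchema = {
  '@context': 'https://schema.org',
  '@type': 'Recipe',
  name: 'Simple Pancakes',
  image: 'https://example.com/images/pancakes.jpg',
  cookTime: 'PT15M',
  aggregateRating: {
    '@type': 'AggregateRating',
    ratingValue: '4.8',
    reviewCount: '132'
  }
};

const script = document.createElement('script');
script.type = 'application/ld+json';
script.textContent = JSON.stringify(recipeSchema);
document.head.appendChild(script);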

☑ Images on the page employ…

Last, but certainly not least, because image search is such a huge portion of where Google’s search traffic comes from and goes to, it is very wise to optimize the images on the page. Image search can now send significant traffic to you, and optimizing for images can sometimes mean that other people will find your images through Google Images and then take them, put them on their own website and link back to you, which solves a huge problem. Getting links is very hard. Images are a great way to do it.

☑ Descriptive, keyword-rich filenames

The images on your page should employ descriptive, keyword-rich filenames, meaning if I have one for Typeform, I don’t want it to be pic1, pic2, or pic3. I want it to be typeform-logo or typeform-survey-software as the name of the file.

☑ Descriptive alt attributes

The alt attribute or alt tag is part of how you describe that for screen readers and other accessibility-focused devices, and Google also uses that text too. 

☑ Caption text (if appropriate)

Caption text, if that’s appropriate, if you have like a photograph and a caption describing it, you want to be descriptive of what’s actually in the picture.

☑ Stored in same domain and subdomain

These files, in order to perform well, they generally need to be hosted on the same domain and subdomain. If, for example, all your images are stored on an Amazon Web Services domain and you don’t bother rewriting or making sure that the domain looks like it’s on toolsource.com/photos or /images here, that can cause real ranking problems. Oftentimes you won’t perform at all in Google images because they don’t associate the image with the same domain. Same subdomain as well is preferable.

If you do all these things and you nail searcher intent and you’ve got your keyword research, you are ready to move on to technical SEO and link building and then start ranking. So we’ll see you for that next edition next week. Take care.

Video transcription by Speechpad.com


Rewriting the Beginner’s Guide to SEO, Chapter 7: Measuring, Prioritizing, & Executing SEO

Posted by BritneyMuller

It’s finally here, for your review and feedback: Chapter 7 of the new Beginner’s Guide to SEO, the last chapter. We cap off the guide with advice on how to measure, prioritize, and execute on your SEO. And if you missed them, check out the drafts of our outline, Chapter One, Chapter Two, Chapter Three, Chapter Four, Chapter Five, and Chapter Six for your reading pleasure. As always, let us know what you think of Chapter 7 in the comments!


Set yourself up for success.

They say if you can measure something, you can improve it.

In SEO, it’s no different. Professional SEOs track everything from rankings and conversions to lost links and more to help prove the value of SEO. Measuring the impact of your work and ongoing refinement is critical to your SEO success, client retention, and perceived value.

It also helps you pivot your priorities when something isn’t working.

Start with the end in mind

While it’s common to have multiple goals (both macro and micro), establishing one specific primary end goal is essential.

The only way to know what a website’s primary end goal should be is to have a strong understanding of the website’s goals and/or client needs. Good client questions are not only helpful in strategically directing your efforts, but they also show that you care.

Client question examples:

  1. Can you give us a brief history of your company?
  2. What is the monetary value of a newly qualified lead?
  3. What are your most profitable services/products (in order)?

Keep the following tips in mind while establishing a website’s primary goal, additional goals, and benchmarks:

Goal setting tips

  • Measurable: If you can’t measure it, you can’t improve it.
  • Be specific: Don’t let vague industry marketing jargon water down your goals.
  • Share your goals: Studies have shown that writing down and sharing your goals with others boosts your chances of achieving them.

Measuring

Now that you’ve set your primary goal, evaluate which additional metrics could help support your site in reaching its end goal. Measuring additional (applicable) benchmarks can help you keep a better pulse on current site health and progress.

Engagement metrics

How are people behaving once they reach your site? That’s the question that engagement metrics seek to answer. Some of the most popular metrics for measuring how people engage with your content include:

Conversion rate – The number of conversions (for a single desired action/goal) divided by the number of unique visits. A conversion rate can be applied to anything, from an email signup to a purchase to account creation. Knowing your conversion rate can help you gauge the return on investment (ROI) your website traffic might deliver.

In Google Analytics, you can set up goals to measure how well your site accomplishes its objectives. If your objective for a page is a form fill, you can set that up as a goal. When site visitors accomplish the task, you’ll be able to see it in your reports.

Time on page – How long did people spend on your page? If you have a 2,000-word blog post that visitors are only spending an average of 10 seconds on, the chances are slim that this content is being consumed (unless they’re a mega-speed reader). However, if a URL has a low time on page, that’s not necessarily bad either. Consider the intent of the page. For example, it’s normal for “Contact Us” pages to have a low average time on page.

Pages per visit – Was the goal of your page to keep readers engaged and take them to a next step? If so, then pages per visit can be a valuable engagement metric. If the goal of your page is independent of other pages on your site (ex: visitor came, got what they needed, then left), then low pages per visit are okay.

Bounce rate – “Bounced” sessions indicate that a searcher visited the page and left without browsing your site any further. Many people try to lower this metric because they believe it’s tied to website quality, but it actually tells us very little about a user’s experience. We’ve seen cases of bounce rate spiking for redesigned restaurant websites that are doing better than ever. Further investigation discovered that people were simply coming to find business hours, menus, or an address, then bouncing with the intention of visiting the restaurant in person. A better metric to gauge page/site quality is scroll depth.

Scroll depth – This measures how far visitors scroll down individual webpages. Are visitors reaching your important content? If not, test different ways of providing the most important content higher up on your page, such as multimedia, contact forms, and so on. Also consider the quality of your content. Are you omitting needless words? Is it enticing for the visitor to continue down the page? Scroll depth tracking can be set up in your Google Analytics.
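
As one possible approach, scroll depth can be sent to Google Analytics as events. The sketch below assumes the standard gtag.js snippet is already installed on the page, and the thresholds and event names are illustrative:

// A rough sketch of scroll depth tracking with gtag.js.
// Assumes gtag() is already installed; thresholds are illustrative.
const thresholds = [25, 50, 75, 100];
const fired = new Set();

window.addEventListener('scroll', () => {
  const scrolled = (window.scrollY + window.innerHeight) /
    document.documentElement.scrollHeight * 100;

  thresholds.forEach(mark => {
    if (scrolled >= mark && !fired.has(mark)) {
      fired.add(mark);
      gtag('event', 'scroll_depth', { event_label: mark + '%' });
    }
  });
}, { passive: true });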

Search traffic

Ranking is a valuable SEO metric, but measuring your site’s organic performance can’t stop there. The goal of showing up in search is to be chosen by searchers as the answer to their query. If you’re ranking but not getting any traffic, you have a problem.

But how do you even determine how much traffic your site is getting from search? One of the most precise ways to do this is with Google Analytics.

Using Google Analytics to uncover traffic insights

Google Analytics (GA) is bursting at the seams with data — so much so that it can be overwhelming if you don’t know where to look. This is not an exhaustive list, but rather a general guide to some of the traffic data you can glean from this free tool.

Isolate organic traffic – GA allows you to view traffic to your site by channel. This will mitigate any scares caused by changes to another channel (ex: total traffic dropped because a paid campaign was halted, but organic traffic remained steady).

Traffic to your site over time – GA allows you to view total sessions/users/pageviews to your site over a specified date range, as well as compare two separate ranges.

How many visits a particular page has received – Site Content reports in GA are great for evaluating the performance of a particular page — for example, how many unique visitors it received within a given date range.

Traffic from a specified campaign – You can use UTM (urchin tracking module) codes for better attribution. Designate the source, medium, and campaign, then append the codes to the end of your URLs. When people start clicking on your UTM-code links, that data will start to populate in GA’s “campaigns” report.
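
For instance, here’s a small sketch of appending UTM parameters to a URL with JavaScript’s URL API; the source, medium, and campaign values are just examples:

// Building a campaign-tagged URL; the parameter values are examples only.
const url = new URL('https://example.com/spring-sale');
url.searchParams.set('utm_source', 'newsletter');
url.searchParams.set('utm_medium', 'email');
url.searchParams.set('utm_campaign', 'spring-sale-2019');

console.log(url.toString());
// https://example.com/spring-sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring-sale-2019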

Click-through rate (CTR) – Your CTR from search results to a particular page (meaning the percent of people that clicked your page from search results) can provide insights on how well you’ve optimized your page title and meta description. You can find this data in Google Search Console, a free Google tool.

In addition, Google Tag Manager is a free tool that allows you to manage and deploy tracking pixels to your website without having to modify the code. This makes it much easier to track specific triggers or activity on a website.

Additional common SEO metrics

  • Domain Authority & Page Authority (DA/PA) – Moz’s proprietary authority metrics provide powerful insights at a glance and are best used as benchmarks relative to your competitors’ Domain Authority and Page Authority.
  • Keyword rankings – A website’s ranking position for desired keywords. This should also include SERP feature data, like featured snippets and People Also Ask boxes that you’re ranking for. Try to avoid vanity metrics, such as rankings for competitive keywords that are desirable but often too vague and don’t convert as well as longer-tail keywords.
  • Number of backlinks – Total number of links pointing to your website or the number of unique linking root domains (meaning one per unique website, as websites often link out to other websites multiple times). While these are both common link metrics, we encourage you to look more closely at the quality of backlinks and linking root domains your site has.

How to track these metrics

There are lots of different tools available for keeping track of your site’s position in SERPs, site crawl health, SERP features, and link metrics, such as Moz Pro and STAT.

The Moz and STAT APIs (among other tools) can also be pulled into Google Sheets or other customizable dashboard platforms for clients and quick at-a-glance SEO check-ins. This also allows you to provide more refined views of only the metrics you care about.

Dashboard tools like Data Studio, Tableau, and PowerBI can also help to create interactive data visualizations.

Evaluating a site’s health with an SEO website audit

By having an understanding of certain aspects of your website — its current position in search, how searchers are interacting with it, how it’s performing, the quality of its content, its overall structure, and so on — you’ll be able to better uncover SEO opportunities. Leveraging the search engines’ own tools can help surface those opportunities, as well as potential issues:

  • Google Search Console – If you haven’t already, sign up for a free Google Search Console (GSC) account and verify your website(s). GSC is full of actionable reports you can use to detect website errors, opportunities, and user engagement.
  • Bing Webmaster Tools – Bing Webmaster Tools has similar functionality to GSC. Among other things, it shows you how your site is performing in Bing and opportunities for improvement.
  • Lighthouse Audit – Google’s automated tool for measuring a website’s performance, accessibility, progressive web apps, and more. This data improves your understanding of how a website is performing. Gain specific speed and accessibility insights for a website here.
  • PageSpeed Insights – Provides website performance insights using Lighthouse and Chrome User Experience Report data from real user measurement (RUM) when available.
  • Structured Data Testing Tool – Validates that a website is using schema markup (structured data) properly.
  • Mobile-Friendly Test – Evaluates how easily a user can navigate your website on a mobile device.
  • Web.dev – Surfaces website improvement insights using Lighthouse and provides the ability to track progress over time.
  • Tools for web devs and SEOs – Google often provides new tools for web developers and SEOs alike, so keep an eye on any new releases here.

While we don’t have room to cover every SEO audit check you should perform in this guide, we do offer an in-depth Technical SEO Site Audit course for more info. When auditing your site, keep the following in mind:

Crawlability: Are your primary web pages crawlable by search engines, or are you accidentally blocking Googlebot or Bingbot via your robots.txt file? Does the website have an accurate sitemap.xml file in place to help direct crawlers to your primary pages?
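For reference, a minimal robots.txt that allows crawling of everything except an admin area and points crawlers at the sitemap might look something like this (the path and domain are placeholders):

User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml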

Indexed pages: Can your primary pages be found using Google? Doing a site:yoursite.com OR site:yoursite.com/specific-page check in Google can help answer this question. If you notice some are missing, check to make sure a meta robots=noindex tag isn’t excluding pages that should be indexed and found in search results.
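For reference, the tag in question sits in a page’s <head> and looks like this:

<meta name="robots" content="noindex">

If that tag appears on a page you want indexed, removing it is the fix.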

Check page titles & meta descriptions: Do your titles and meta descriptions do a good job of summarizing the content of each page? How are their CTRs in search results, according to Google Search Console? Are they written in a way that entices searchers to click your result over the other ranking URLs? Which pages could be improved? Site-wide crawls are essential for discovering on-page and technical SEO opportunities.

Page speed: How does your website perform on mobile devices and in Lighthouse? Which images could be compressed to improve load time?

Content quality: How well does the current content of the website meet the target market’s needs? Is the content 10X better than other ranking websites’ content? If not, what could you do better? Think about things like richer content, multimedia, PDFs, guides, audio content, and more.

Pro tip: Website pruning!

Removing thin, old, low-quality, or rarely visited pages from your site can help improve your website’s perceived quality. Performing a content audit will help you discover these pruning opportunities. Three primary ways to prune pages include:

  1. Delete the page (4XX): Use when a page adds no value (ex: traffic, links) and/or is outdated.
  2. Redirect (3XX): Redirect the URLs of pages you’re pruning when you want to preserve the value they add to your site, such as inbound links to that old URL (see the redirect example after this list).
  3. NoIndex: Use this when you want the page to remain on your site but be removed from the index.
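To illustrate option 2, a 301 redirect added via an Apache .htaccess file might look like this; the paths are placeholders, and other servers and CMSs have their own redirect mechanisms:

Redirect 301 /old-page https://www.example.com/new-page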

Keyword research and competitive website analysis (performing audits on your competitors’ websites) can also provide rich insights on opportunities for your own website.

For example:

  • Which keywords are competitors ranking on page 1 for, but your website isn’t?
  • Which keywords is your website ranking on page 1 for that also have a featured snippet? You might be able to provide better content and take over that snippet.
  • Which websites link to more than one of your competitors, but not to your website?

Discovering website content and performance opportunities will help you devise a more data-driven SEO plan of attack! Keep an ongoing list so you can prioritize your tasks effectively.

Prioritizing your SEO fixes

In order to prioritize SEO fixes effectively, it’s essential to first have specific, agreed-upon goals established between you and your client.

While there are a million different ways you could prioritize SEO, we suggest you rank them in terms of importance and urgency. Which fixes could provide the most ROI for a website and help support your agreed-upon goals?

Stephen Covey, author of The 7 Habits of Highly Effective People, developed a handy time management grid that can ease the burden of prioritization:


Source: Stephen Covey, The 7 Habits of Highly Effective People

Putting out small, urgent SEO fires might feel most effective in the short term, but this often leads to neglecting non-urgent important fixes. The not urgent & important items are ultimately what often move the needle for a website’s SEO. Don’t put these off.

SEO planning & execution

“Without strategy, execution is aimless. Without execution, strategy is useless.”
- Morris Chang

Much of your success depends on effectively mapping out and scheduling your SEO tasks. You can use free tools like Google Sheets to plan out your SEO execution (we have a free template here), but you can use whatever method works best for you. Some people prefer to schedule out their SEO tasks in their Google Calendar, in a kanban or scrum board, or in a daily planner.

Use what works for you and stick to it.

Measuring your progress along the way via the metrics mentioned above will help you monitor your effectiveness and allow you to pivot your SEO efforts when something isn’t working. Say, for example, you changed a primary page’s title and meta description, only to notice that the CTR for that page decreased. Perhaps you changed it to something too vague or strayed too far from the on-page topic — it might be good to try a different approach. Keeping an eye on drops in rankings, CTRs, organic traffic, and conversions can help you manage hiccups like this early, before they become a bigger problem.

Communication is essential for SEO client longevity

Many SEO fixes are implemented without being noticeable to a client (or user). This is why it’s essential to employ good communication skills around your SEO plan, the time frame in which you’re working, and your benchmark metrics, as well as frequent check-ins and reports.



A New Domain Authority Is Coming Soon: What’s Changing, When, & Why

Posted by rjonesx.

Howdy Moz readers,

I’m Russ Jones, Principal Search Scientist at Moz, and I am excited to announce a fantastic upgrade coming next month to one of the most important metrics Moz offers: Domain Authority.

Domain Authority has become the industry standard for measuring the strength of a domain relative to ranking. We recognize that stability plays an important role in making Domain Authority valuable to our customers, so we wanted to make sure the new Domain Authority brought improvements meaningful enough to justify the change.

Learn more about the new DA

What’s changing?

What follows is an account of some of the technical changes behind the new Domain Authority and why they matter.

The training set:

Historically, we’ve relied on training Domain Authority against an unmanipulated, large set of search results. In fact, this has been the standard methodology across our industry. But we have found a way to improve upon it that fundamentally, from the ground up, makes Domain Authority more reliable.

The training algorithm:

Rather than relying on a complex linear model, we’ve made the switch to a neural network. This offers several benefits including a much more nuanced model which can detect link manipulation.

The model factors:

We have greatly improved upon the ranking factors behind Domain Authority. In addition to looking at link counts, we’ve now been able to integrate our proprietary Spam Score and complex distributions of links based on quality and traffic, along with a bevy of other factors.

The backbone:

At the heart of Domain Authority is the industry’s leading link index, our new Moz Link Explorer. With over 35 trillion links, our exceptional data turns the brilliant statistical work by Neil Martinsen-Burrell, Chas Williams, and so many more amazing Mozzers into a true industry standard.

What does this mean?

These fundamental improvements to Domain Authority will deliver a better, more trustworthy metric than ever before. We can remove spam, improve correlations, and, most importantly, update Domain Authority relative to all the changes that Google makes.

It means that you will see some changes to Domain Authority when the launch occurs. We staked the model to our existing Domain Authority which minimizes changes, but with all the improvements there will no doubt be some fluctuation in Domain Authority scores across the board.

What should we do?

Use DA as a relative metric, not an absolute one.

First, make sure that you use Domain Authority as a relative metric. Domain Authority is meaningless when it isn’t compared to other sites. What matters isn’t whether your site drops or increases — it’s whether it drops or increases relative to your competitors. When we roll out the new Domain Authority, make sure you check your competitors’ scores as well as your own, as they will likely fluctuate in a similar direction.

Know how to communicate changes with clients, colleagues, and stakeholders

Second, be prepared to communicate with your clients or webmasters about the changes and improvements to Domain Authority. While change is always disruptive, the new Domain Authority is better than ever and will allow them to make smarter decisions about search engine optimization strategies going forward.

Expect DA to keep pace with Google

Finally, expect that we will be continuing to improve Domain Authority. Just like Google makes hundreds of changes to their algorithm every year, we intend to make Domain Authority much more responsive to Google’s changes. Even when Google makes fundamental algorithm updates like Penguin or Panda, you can feel confident that Moz’s Domain Authority will be as relevant and useful as ever.

When is it happening?

We plan on rolling out the new Domain Authority on March 5th, 2019. We will have several more communications between now and then to help you and your clients best respond to the new Domain Authority, including a webinar on February 21st. We hope you’re as excited as we are and look forward to continuing to bring you the most reliable, cutting-edge metrics our industry has to offer.


Be sure to check out the resources we’ve prepared to help you acclimate to the change, including an educational whitepaper and a presentation you can download to share with your clients, team, and stakeholders:

Explore more resources here


Rewriting the Beginner’s Guide to SEO, Chapter 6: Link Building & Establishing Authority

Posted by BritneyMuller

In Chapter 6 of the new Beginner’s Guide to SEO, we’ll be covering the dos and don’ts of link building and ways your site can build its authority. If you missed them, we’ve got the drafts of our outline, Chapter One, Chapter Two, Chapter Three, Chapter Four, and Chapter Five for your reading pleasure. Be sure to let us know what you think of Chapter 6 in the comments!


Chapter 6: Link Building & Establishing Authority

Turn up the volume.

You’ve created content that people are searching for, that answers their questions, and that search engines can understand, but those qualities alone don’t mean it’ll rank. To outrank the rest of the sites with those qualities, you have to establish authority. That can be accomplished by earning links from authoritative websites, building your brand, and nurturing an audience who will help amplify your content.

Google has confirmed that links and quality content (which we covered back in Chapter 4) are two of the three most important ranking factors for SEO. Trustworthy sites tend to link to other trustworthy sites, and spammy sites tend to link to other spammy sites. But what is a link, exactly? How do you go about earning them from other websites? Let’s start with the basics.

What are links?

Inbound links, also known as backlinks or external links, are HTML hyperlinks that point from one website to another. They’re the currency of the Internet, as they act a lot like real-life reputation. If you went on vacation and asked three people (all completely unrelated to one another) what the best coffee shop in town was, and they all said, “Cuppa Joe on Main Street,” you would feel confident that Cuppa Joe is indeed the best coffee place in town. Links do that for search engines.

Since the late 1990s, search engines have treated links as votes for popularity and importance on the web.

Internal links, or links that connect internal pages of the same domain, work very similarly for your website. A high number of internal links pointing to a particular page on your site will provide a signal to Google that the page is important, so long as it’s done naturally and not in a spammy way.

The engines themselves have refined the way they view links, now using algorithms to evaluate sites and pages based on the links they find. But what’s in those algorithms? How do the engines evaluate all those links? It all starts with the concept of E-A-T.

You are what you E-A-T

Google’s Search Quality Rater Guidelines put a great deal of importance on the concept of E-A-T — an acronym for expert, authoritative, and trustworthy. Sites that don’t display these characteristics tend to be seen as lower-quality in the eyes of the engines, while those that do are subsequently rewarded. E-A-T is becoming more and more important as search evolves and increases the importance of solving for user intent.

Creating a site that’s considered expert, authoritative, and trustworthy should be your guiding light as you practice SEO. Not only will it simply result in a better site, but it’s future-proof. After all, providing great value to searchers is what Google itself is trying to do.

E-A-T and links to your site

The more popular and important a site is, the more weight the links from that site carry. A site like Wikipedia, for example, has thousands of diverse sites linking to it. This indicates it provides lots of expertise, has cultivated authority, and is trusted among those other sites.

To earn trust and authority with search engines, you’ll need links from websites that display the qualities of E-A-T. These don’t have to be Wikipedia-level sites, but they should provide searchers with credible, trustworthy content.

  • Tip: Moz has proprietary metrics to help you determine how authoritative a site is: Domain Authority, Page Authority, and Spam Score. In general, you’ll want links from sites with a higher Domain Authority than your own site.

Followed vs. nofollowed links

Remember how links act as votes? The rel=nofollow attribute (pronounced as two words, “no follow”) allows you to link to a resource while removing your “vote” for search engine purposes.

Just like it sounds, “nofollow” tells search engines not to follow the link. Some engines still follow them simply to discover new pages, but these links don’t pass link equity (the “votes of popularity” we talked about above), so they can be useful in situations where a page is either linking to an untrustworthy source or was paid for or created by the owner of the destination page (making it an unnatural link).

Say, for example, you write a post about link building practices, and want to call out an example of poor, spammy link building. You could link to the offending site without signaling to Google that you trust it.

Standard links (ones that haven’t had nofollow added) look like this:

<a href="https://moz.com">I love Moz</a>

Nofollow link markup looks like this:

<a href="https://moz.com" rel="nofollow">I love Moz</a>

If follow links pass all the link equity, shouldn’t that mean you want only follow links?

Not necessarily. Think about all the legitimate places you can create links to your own website: a Facebook profile, a Yelp page, a Twitter account, etc. These are all natural places to add links to your website, but they shouldn’t count as votes for your website. (Setting up a Twitter profile with a link to your site isn’t a vote from Twitter that they like your site.)

It’s natural for your site to have a balance between nofollowed and followed backlinks in its link profile (more on link profiles below). A nofollow link might not pass authority, but it could send valuable traffic to your site and even lead to future followed links.

  • Tip: Use the MozBar extension for Google Chrome to highlight links on any page to find out whether they’re nofollow or follow without ever having to view the source code!

Your link profile

Your link profile is an overall assessment of all the inbound links your site has earned: the total number of links, their quality (or spamminess), their diversity (is one site linking to you hundreds of times, or are hundreds of sites linking to you once?), and more. The state of your link profile helps search engines understand how your site relates to other sites on the Internet. There are various SEO tools that allow you to analyze your link profile and begin to understand its overall makeup.

How can I see which inbound links point to my website?

Visit Moz Link Explorer and type in your site’s URL. You’ll be able to see how many and which websites are linking back to you.

What are the qualities of a healthy link profile?

When people began to learn about the power of links, they began manipulating them for their benefit. They’d find ways to gain artificial links just to increase their search engine rankings. While these dangerous tactics can sometimes work, they are against Google’s terms of service and can get a website deindexed (removal of web pages or entire domains from search results). You should always try to maintain a healthy link profile.

A healthy link profile is one that indicates to search engines that you’re earning your links and authority fairly. Just like you shouldn’t lie, cheat, or steal, you should strive to ensure your link profile is honest and earned via your hard work.

Links are earned or editorially placed

Editorial links are links added naturally by sites and pages that want to link to your website.

The foundation of acquiring earned links is almost always through creating high-quality content that people genuinely wish to reference. This is where creating 10X content (a way of describing extremely high-quality content) is essential! If you can provide the best and most interesting resource on the web, people will naturally link to it.

Naturally earned links require no specific action from you, other than the creation of worthy content and the ability to create awareness about it.

  • Tip: Earned mentions are often unlinked! When websites are referring to your brand or a specific piece of content you’ve published, they will often mention it without linking to it. To find these earned mentions, use Moz’s Fresh Web Explorer. You can then reach out to those publishers to see if they’ll update those mentions with links.

Links are relevant and from topically similar websites

Links from websites within a topic-specific community are generally better than links from websites that aren’t relevant to your site. If your website sells dog houses, a link from the Society of Dog Breeders matters much more than one from the Roller Skating Association. Additionally, links from topically irrelevant sources can send confusing signals to search engines regarding what your page is about.

  • Tip: Linking domains don’t have to match the topic of your page exactly, but they should be related. Avoid pursuing backlinks from sources that are completely off-topic; there are far better uses of your time.

Anchor text is descriptive and relevant, without being spammy

Anchor text helps tell Google what the topic of your page is about. If dozens of links point to a page with a variation of a word or phrase, the page has a higher likelihood of ranking well for those types of phrases. However, proceed with caution! Too many backlinks with the same anchor text could indicate to the search engines that you’re trying to manipulate your site’s ranking in search results.

Consider this. You ask ten separate friends, at separate times, how their day is going, and they each respond with the same phrase:

“Great! I started my day by walking my dog, Peanut, and then had a picante beef Top Ramen for lunch.”

That’s strange, and you’d be quite suspicious of your friends. The same goes for Google. Describing the content of the target page with the anchor text helps them understand what the page is about, but the same description over and over from multiple sources starts to look suspicious. Aim for relevance; avoid spam.

  • Tip: Use the “Anchor Text” report in Moz’s Link Explorer to see what anchor text other websites are using to link to your content.

Links send qualified traffic to your site

Link building should never be solely about search engine rankings. Esteemed SEO and link building thought leader Eric Ward used to say that you should build your links as though Google might disappear tomorrow. In essence, you should focus on acquiring links that will bring qualified traffic to your website — another reason why it’s important to acquire links from relevant websites whose audience would find value in your site, as well.

  • Tip: Use the “Referral Traffic” report in Google Analytics to evaluate websites that are currently sending you traffic. How can you continue to build relationships with similar types of websites?

Link building don’ts & things to avoid

Spammy link profiles are just that: full of links built in unnatural, sneaky, or otherwise low-quality ways. Practices like buying links or engaging in a link exchange might seem like the easy way out, but doing so is dangerous and could put all of your hard work at risk. Google penalizes sites with spammy link profiles, so don’t give in to temptation.

A guiding principle for your link building efforts is to never try to manipulate a site’s ranking in search results. But isn’t that the entire goal of SEO? To increase a site’s ranking in search results? And herein lies the confusion. Google wants you to earn links, not build them, but the line between the two is often blurry. To avoid penalties for unnatural links (known as “link spam”), Google has made clear what should be avoided.

Purchased links

Google and Bing both seek to discount the influence of paid links in their organic search results. While a search engine can’t know which links were earned vs. paid for from viewing the link itself, there are clues it uses to detect patterns that indicate foul play. Websites caught buying or selling followed links risk penalties that can severely drop their rankings. (By the way, exchanging goods or services for a link is also a form of payment and qualifies as buying links.)

Link exchanges / reciprocal linking

If you’ve ever received a “you link to me and I’ll link to you” email from someone you have no affiliation with, you’ve been targeted for a link exchange. Google’s quality guidelines caution against “excessive” link exchanges and similar partner programs conducted exclusively for the sake of cross-linking, so there is some indication that this type of exchange on a smaller scale might not trigger any link spam alarms.

It is acceptable, and even valuable, to link to people you work with, partner with, or have some other affiliation with and have them link back to you.

It’s the exchange of links at mass scale with unaffiliated sites that can warrant penalties.

Low-quality directory links

These used to be a popular source of manipulation. A large number of pay-for-placement web directories exist to serve this market and pass themselves off as legitimate, with varying degrees of success. These types of sites tend to look very similar, with large lists of websites and their descriptions (typically, the site’s critical keyword is used as the anchor text to link back to the submitter’s site).

There are many more manipulative link building tactics that search engines have identified. In most cases, they have found algorithmic methods for reducing their impact. As new spam systems emerge, engineers will continue to fight them with targeted algorithms, human reviews, and the collection of spam reports from webmasters and SEOs. By and large, it isn’t worth finding ways around them.

If your site does get a manual penalty, there are steps you can take to get it lifted.

How to build high-quality backlinks

Link building comes in many shapes and sizes, but one thing is always true: link campaigns should always match your unique goals. With that said, there are some popular methods that tend to work well for most campaigns. This is not an exhaustive list, so visit Moz’s blog posts on link building for more detail on this topic.

Find customer and partner links

If you have partners you work with regularly, or loyal customers that love your brand, there are ways to earn links from them with relative ease. You might send out partnership badges (graphic icons that signify mutual respect), or offer to write up testimonials of their products. Both of those offer things they can display on their website along with links back to you.

Publish a blog

This content and link building strategy is so popular and valuable that it’s one of the few recommended personally by the engineers at Google. Blogs have the unique ability to contribute fresh material on a consistent basis, generate conversations across the web, and earn listings and links from other blogs.

Careful, though — you should avoid low-quality guest posting just for the sake of link building. Google has advised against this and your energy is better spent elsewhere.

Create unique resources

Creating unique, high-quality resources is no easy task, but it’s well worth the effort. High-quality content that is promoted in the right ways can be widely shared. It helps to create pieces with the kinds of traits that make people genuinely want to reference and share them.

Creating a resource like this is a great way to attract a lot of links with one page. You could also create a highly specific resource — without as broad an appeal — that targets a handful of websites. You might see a higher rate of success, but that approach isn’t as scalable.

Users who see this kind of unique content often want to share it with friends, and bloggers/tech-savvy webmasters who see it will often do so through links. These high quality, editorially earned votes are invaluable to building trust, authority, and rankings potential.

Build resource pages

Resource pages are a great way to build links. However, to find them you’ll want to know some Advanced Google operators to make discovering them a bit easier.

For example, if you were doing link building for a company that made pots and pans, you could search for: cooking intitle:"resources" and see which pages might be good link targets.
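A few more hypothetical operator combinations you might experiment with when prospecting for resource pages:

cookware intitle:"useful resources"
cooking inurl:resources
baking intitle:"resources" site:.edu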

This can also give you great ideas for content creation — just think about which types of resources you could create that these pages would all like to reference/link to.

Get involved in your local community

For a local business (one that meets its customers in person), community outreach can result in some of the most valuable and influential links.

  • Engage in sponsorships and scholarships.
  • Host or participate in community events, seminars, workshops, and organizations.
  • Donate to worthy local causes and join local business associations.
  • Post jobs and offer internships.
  • Promote loyalty programs.
  • Run a local competition.
  • Develop real-world relationships with related local businesses to discover how you can team up to improve the health of your local economy.

All of these smart and authentic strategies provide good local link opportunities.

Refurbish top content

You likely already know which of your site’s content earns the most traffic, converts the most customers, or retains visitors for the longest amount of time.

Take that content and refurbish it for other platforms (Slideshare, YouTube, Instagram, Quora, etc.) to expand your acquisition funnel beyond Google.

You can also dust off, update, and simply republish older content on the same platform. If you discover that a few trusted industry websites all linked to a popular resource that’s gone stale, update it and let those industry websites know — you may just earn a good link.

You can also do this with images. Reach out to websites that are using your images and not citing/linking back to you and ask if they’d mind including a link.

Be newsworthy

Earning the attention of the press, bloggers, and news media is an effective, time-honored way to earn links. Sometimes this is as simple as giving something away for free, releasing a great new product, or stating something controversial. Since so much of SEO is about creating a digital representation of your brand in the real world, to succeed in SEO, you have to be a great brand.

Be personal and genuine

The most common mistake new SEOs make when trying to build links is not taking the time to craft a custom, personal, and valuable initial outreach email. You know as well as anyone how annoying spammy emails can be, so make sure yours doesn’t make people roll their eyes.

Your goal for an initial outreach email is simply to get a response. These tips can help:

  • Make it personal by mentioning something the person is working on, where they went to school, their dog, etc.
  • Provide value. Let them know about a broken link on their website or a page that isn’t working on mobile.
  • Keep it short.
  • Ask one simple question (typically not for a link; you’ll likely want to build a rapport first).

Pro Tip:

Earning links can be very resource-intensive, so you’ll likely want to measure your success to prove the value of those efforts.

Metrics for link building should match up with the site’s overall KPIs. These might be sales, email subscriptions, page views, etc. You should also evaluate Domain and/or Page Authority scores, the ranking of desired keywords, and the amount of traffic to your content — but we’ll talk more about measuring the success of your SEO campaigns in Chapter 7.

Beyond links: How awareness, amplification, and sentiment impact authority

A lot of the methods you’d use to build links will also indirectly build your brand. In fact, you can view link building as a great way to increase awareness of your brand, the topics on which you’re an authority, and the products or services you offer.

Once your target audience knows about you and you have valuable content to share, let your audience know about it! Sharing your content on social platforms will not only make your audience aware of your content, but it can also encourage them to amplify that awareness to their own networks, thereby extending your own reach.

Are social shares the same as links? No. But shares to the right people can result in links. Social shares can also promote an increase in traffic and new visitors to your website, which can grow brand awareness, and with a growth in brand awareness can come a growth in trust and links. The connection between social signals and rankings seems indirect, but even indirect correlations can be helpful for informing strategy.

Trustworthiness goes a long way

For search engines, trust is largely determined by the quality and quantity of the links your domain has earned, but that’s not to say that there aren’t other factors at play that can influence your site’s authority. Think about all the different ways you come to trust a brand:

  • Awareness (you know they exist)
  • Helpfulness (they provide answers to your questions)
  • Integrity (they do what they say they will)
  • Quality (their product or service provides value; possibly more than others you’ve tried)
  • Continued value (they continue to provide value even after you’ve gotten what you needed)
  • Voice (they communicate in unique, memorable ways)
  • Sentiment (others have good things to say about their experience with the brand)

That last point is what we’re going to focus on here. Reviews of your brand, its products, or its services can make or break a business.

In your effort to establish authority from reviews, follow these review rules of thumb:

  • Never pay any individual or agency to create a fake positive review for your business or a fake negative review of a competitor.
  • Don’t review your own business or the businesses of your competitors. Don’t have your staff do so either.
  • Never offer incentives of any kind in exchange for reviews.
  • All reviews must be left directly by customers in their own accounts; never post reviews on behalf of a customer or employ an agency to do so.
  • Don’t set up a review station/kiosk in your place of business; many reviews stemming from the same IP can be viewed as spam.
  • Read the guidelines of each review platform where you’re hoping to earn reviews.

Be aware that review spam is a problem that’s taken on global proportions, and that violation of governmental truth-in-advertising guidelines has led to legal prosecution and heavy fines. It’s just too dangerous to be worth it. Playing by the rules and offering exceptional customer experiences is the winning combination for building both trust and authority over time.

Authority is built when brands are doing great things in the real-world, making customers happy, creating and sharing great content, and earning links from reputable sources.

In the next and final section, you’ll learn how to measure the success of all your efforts, as well as tactics for iterating and improving upon them. Onward!


Full Funnel Testing: SEO & CRO Together – Whiteboard Friday

Posted by willcritchlow

Testing for only SEO or only CRO isn’t always ideal. Some changes result in higher conversions and reduced site traffic, for instance, while others may rank more highly but convert less well. In today’s Whiteboard Friday, we welcome Will Critchlow as he demonstrates a method of testing for both your top-of-funnel SEO changes and your conversion-focused CRO changes at once.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hi, everyone. Welcome to another Whiteboard Friday. My name is Will Critchlow, one of the founders at Distilled. If you’ve been following what I’ve been writing and talking about around the web recently, today’s topic may not surprise you that much. I’m going to be talking about another kind of SEO testing.

Over at Distilled, we’ve been investing pretty heavily in building out our capability to do SEO tests and in particular built our optimization delivery network, which has let us do a new kind of SEO testing that hasn’t been previously available to most of our clients. Recently we’ve been working on a new enhancement to this, which is full funnel testing, and that’s what I want to talk about today.

So funnel testing is testing all the way through the funnel, from acquisition at the SEO end to conversion. So it’s SEO testing plus CRO testing together. I’m going to write a little bit more about some of the motivation for this. But, in a nutshell, it essentially boils down to the fact that it is perfectly possible, in fact we’ve seen in the wild cases of tests that win in SEO terms and lose in CRO terms or vice versa.

In other words, tests that maybe you make a change and it converts better, but you lose organic search traffic. Or the other way around, it ranks better, but it converts less well. If you’re only testing one, which is common — I mean most organizations are only testing the conversion rate side of things — it’s perfectly possible to have a winning test, roll it out, and do worse.

CRO testing

So let’s step back a little bit. A little bit of a primer. Conversion rate optimization testing works in an A/B split kind of way. You can test on a single page, if you want to, or a site section. The way it works is you split your audience. So your audience is split. Some of your audience gets one version of the page, and the rest of the audience gets a different version.

Then you can compare the conversion rate among the group who got the control and the group who got the variant. That’s very straightforward. Like I say, it can happen on a single page or across an entire site. SEO testing, a little bit newer. The way this works is you can’t split the audience, because we care very much about the search engine spiders in this case. For the purposes of this consideration, there’s essentially only one Googlebot. So you couldn’t put Google in Class A or Class B here and expect to get anything meaningful.

SEO testing

So the way that we do an SEO test is we actually split the pages. To do this, you need a substantial site section. So imagine, for example, an e-commerce website with thousands of products. You might have a hypothesis of something that will help those product pages perform better. You take your hypothesis and you only apply it to some of the pages, and you leave some of the pages unchanged as a control.

Then, crucially, search engines and users see the same experience. There’s no cloaking going on. There’s no duplication of content. You simply change some pages and leave others unchanged. Then you apply advanced statistical analysis to figure out whether these pages get statistically more organic search traffic than we think they would have gotten if we hadn’t made this change. So that’s how an SEO test works.

Now, as I said, the problem we’re trying to tackle here is that, despite Google’s best intentions to do what’s right for users, it’s perfectly plausible that you can have a test that ranks better but converts less well, or vice versa. We’ve seen this with, for example, removing content from a page. Sometimes having a cleaner, simpler page can convert better. But maybe that was where the keywords were and maybe that was helping the page rank. So we’re trying to avoid those kinds of situations.

Full funnel testing

That’s where full funnel testing comes in. So I want to just run through how you run a full funnel test. What you do is you first of all set it up in the same way as an SEO test, because we’re essentially starting with SEO at the top of the funnel. So it’s set up exactly the same way.

Some pages are unchanged. Some pages get the hypothesis applied to them. As far as Google is concerned, that’s the end of the story, because on any individual request to these pages, that’s what we serve back. But the critically important thing here is I’ve got my little character: a human user whose browser performs a search, “What do badgers eat?”

This was one of our silly examples that we came up with on one of our demo sites. The user lands on this page here, and we then set a cookie. As this user navigates around the site, no matter where they go within this site section, they get the same treatment, either the control or the variant, across the entire site section. This is more like the conversion rate test here.
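To make that concrete, here is a rough, hypothetical sketch of that cookie-based assignment in plain JavaScript. It is illustrative only, not Distilled’s actual implementation, and the variant paths and cookie name are made up:

// Pages pre-assigned to the variant bucket for the SEO side of the test
var VARIANT_PATHS = ['/product/page-a', '/product/page-b'];

function getUserTreatment() {
  // Returning visitors keep whatever treatment they were first given
  var match = document.cookie.match(/(?:^|; )seo_test_treatment=([^;]+)/);
  if (match) return match[1];

  // First visit: the landing page's bucket decides the site-wide treatment
  var treatment = VARIANT_PATHS.indexOf(window.location.pathname) !== -1 ? 'variant' : 'control';
  document.cookie = 'seo_test_treatment=' + treatment + '; path=/; max-age=2592000';
  return treatment;
}

Because Googlebot doesn’t carry that cookie between requests, it simply sees each page’s own control or variant version, which is exactly the split described next.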

Googlebot = stateless requests

So what I didn’t show in this diagram is if you were running this test across a site section, you would cookie this user and make sure that they always saw the same treatment no matter where they navigated around the site. So because Googlebot is making stateless requests, in other words just independent, one-off requests for each of these pages with no cookie set, Google sees the split.

Evaluate SEO test on entrances

Users get whatever their first page impression looks like. They then get that treatment applied across the entire site section. So what we can do then is we can evaluate independently the performance in search, evaluate that on entrances. So do we get significantly more entrances to the variant pages than we would have expected if we hadn’t applied a hypothesis to them?

That tells us the uplift from an SEO perspective. So maybe we say, “Okay, this is plus 11% in organic traffic.” Well, great. So in a vacuum, all else being equal, we’d love to roll out this test.

Evaluate conversion rate on users

But before we do that, what we can do now is we can evaluate the conversion rate, and we do that based on user metrics. So these users are cookied.

We can also set an analytics tag on them and say, “Okay, wherever they navigate around, how many of them end up converting?” Then we can evaluate the conversion rate based on whether they saw treatment A or treatment B. Because we’re looking at conversion rate, the audience size doesn’t exactly have to be the same. So the statistical analysis can take care of that fact, and we can evaluate the conversion rate on a user-centric basis.

So then we maybe see that it’s -5% in conversion rate. We then need to evaluate, “Is this something we should roll out?” So step 1 is: Do we just roll it out? If it’s a win in both, then the answer is yes, probably. If they’re in different directions, then there are a couple of things we can do. Firstly, we can evaluate the relative performance in each direction, keeping in mind that conversion rate applies across all channels, so a relatively small drop in conversion rate can be a really big deal compared to even an uplift in organic traffic, because the conversion rate applies to all channels, not just your organic traffic channel.

But suppose that it’s a small net positive or a small net negative. What we can then do is we might get to the point that it’s a net positive and roll it out. Either way, we might then say, “What can we take from this? What can we actually learn?” So back to our example of the content. We might say, “You know what? Users like this cleaner version of the page with apparently less content on it. The search engines are clearly relying on that content to understand what this page is about. How do we get the best of both worlds?”

Well, that might be a question of a redesign, moving the layout of the page around a little bit, keeping the content on there, but maybe not putting it front and center to the user as they land right at the beginning. We can test those different things, run sequential tests, try and take the best of the SEO tests and the best of the CRO tests and get it working together and crucially avoid those situations where you think you’ve got a win, because your conversion rate is up, but you actually are about to crater your organic search performance.

We think that the more data-driven we get and the more accountable SEO testing makes us, the more important it’s going to be to join these dots and make sure that we’re getting true uplifts on a net basis when we combine them. So I hope that’s been useful to some of you. Thank you for joining me on this week’s Whiteboard Friday. I’m Will Critchlow from Distilled.

Take care.

Video transcription by Speechpad.com


The SEO Cyborg: How to Resonate with Users & Make Sense to Search Bots

Posted by alexis-sanders

SEO is about understanding how search bots and users react to an online experience. As search professionals, we’re required to bridge gaps between online experiences, search engine bots, and users. We need to know where to insert ourselves (or our teams) to ensure the best experience for both users and bots. In other words, we strive for experiences that resonate with humans and make sense to search engine bots.

This article seeks to answer the following questions:

  • How do we drive sustainable growth for our clients?
  • What are the building blocks of an organic search strategy?

What is the SEO cyborg?

A cyborg (or cybernetic organism) is defined as “a being with both organic and biomechatronic body parts, whose physical abilities are extended beyond normal human limitations by mechanical elements.”

With the ability to relate between humans, search bots, and our site experiences, the SEO cyborg is an SEO (or team) that is able to work seamlessly between both technical and content initiatives (whose skills are extended beyond normal human limitations) to support driving of organic search performance. An SEO cyborg is able to strategically pinpoint where to place organic search efforts to maximize performance.

So, how do we do this?

The SEO model

Like so many classic triads (think: primary colors, the Three Musketeers, Destiny’s Child [the canonical version, of course]), the traditional SEO model, known as the crawl-index-rank method, packages SEO into three distinct steps. At the same time, however, this model fails to capture the breadth of work that we SEOs are expected to do on a daily basis, and not having a functioning model can be limiting. We need to expand this model without reinventing the wheel.

The enhanced model involves adding in a rendering, signaling, and connection phase.

You might be wondering, why do we need these?:

  • Rendering: There is increased prevalence of JavaScript, CSS, imagery, and personalization.
  • Signaling: HTML <link> tags, status codes, and even GSC signals are powerful indicators that tell search engines how to process and understand the page, determine its intent, and ultimately rank it. In the previous model, it didn’t feel as if these powerful elements really had a place.
  • Connecting: People are a critical component of search. The ultimate goal of search engines is to identify and rank content that resonates with people. In the previous model, “rank” felt cold, hierarchical, and indifferent towards the end user.

All of this brings us to the question: how do we find success in each stage of this model?

Note: When using this piece, I recommend skimming ahead and leveraging those sections of the enhanced model that are most applicable to your business’ current search program.

The enhanced SEO model

Crawling

Technical SEO starts with the search engine’s ability to find a site’s webpages (hopefully efficiently).

Finding pages

Initially finding pages can happen a few ways, via:

  • Links (internal or external)
  • Redirected pages
  • Sitemaps (XML, RSS 2.0, Atom 1.0, or .txt)
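For reference, a minimal XML sitemap (the most common of those formats) looks something like this, with a placeholder URL:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/page-1/</loc>
    <lastmod>2019-01-15</lastmod>
  </url>
</urlset>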

Side note: This information (although at first pretty straightforward) can be really useful. For example, if you’re seeing weird pages popping up in site crawls or performing in search, try checking:

  • Backlink reports
  • Internal links to URL
  • Redirected into URL

Obtaining resources

The second component of crawling relates to the ability to obtain resources (which later becomes critical for rendering a page’s experience).

This typically relates to two elements:

  1. Appropriate robots.txt declarations
  2. Proper HTTP status code (namely 200 HTTP status codes)

Crawl efficiency

Finally, there’s the idea of how efficiently a search engine bot can traverse your site’s most critical experiences.

Action items:

  • Is the site’s main navigation simple, clear, and useful?
  • Are there relevant on-page links?
  • Is internal linking clear and crawlable (i.e., <a href="/">)?
  • Is an HTML sitemap available?
    • Side note: Make sure to check the HTML sitemap’s next page flow (or behavior flow reports) to find where those users are going. This may help to inform the main navigation.
  • Do footer links contain tertiary content?
  • Are important pages close to root?
  • Are there no crawl traps?
  • Are there no orphan pages?
  • Are pages consolidated?
  • Do all pages have purpose?
  • Has duplicate content been resolved?
  • Have redirects been consolidated?
  • Are canonical tags on point?
  • Are parameters well defined?

Information architecture

The organization of information extends past the bots, requiring an in-depth understanding of how users engage with a site.

Some seed questions to begin research include:

  • What trends appear in search volume (by location, device)? What are common questions users have?
  • Which pages get the most traffic?
  • What are common user journeys?
  • What are users’ traffic behaviors and flow?
  • How do users leverage site features (e.g., internal site search)?

Rendering

Rendering a page relates to search engines’ ability to capture the page’s desired essence.

JavaScript

The big kahuna in the rendering section is JavaScript. For Google, rendering of JavaScript occurs during a second wave of indexing and the content is queued and rendered as resources become available.

Image based off of Google I/O ’18 presentation by Tom Greenway and John Mueller, Deliver search-friendly JavaScript-powered websites

As an SEO, it’s critical that we be able to answer the question — are search engines rendering my content?

Action items:

  • Are direct “quotes” from content indexed?
  • Is the site using <a href="/"> links (not onclick();)?
  • Is the same content being served to search engine bots (user-agent)?
  • Is the content present within the DOM?
  • What does Google’s Mobile-Friendly Testing Tool’s JavaScript console (click “view details”) say?

Infinite scroll and lazy loading

Another hot topic relating to JavaScript is infinite scroll (and lazy load for imagery). Since search engine bots are lazy users, they won’t scroll to attain content.

Action items:

Ask ourselves – should all of the content really be indexed? Is it content that provides value to users?

  • Infinite scroll: a user experience (and occasionally a performance optimizing) tactic to load content when the user hits a certain point in the UI; typically the content is exhaustive.

Solution one (updating AJAX):

1. Break out content into separate sections

  • Note: The breakout of pages can be /page-1, /page-2, etc.; however, it would be best to delineate meaningful divides (e.g., /voltron, /optimus-prime, etc.)

2. Implement the History API (pushState(), replaceState()) to update URLs as a user scrolls (i.e., push/update the URL into the URL bar); see the sketch after these steps

3. Add the <link> tag’s rel="next" and rel="prev" on the relevant pages
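Here is a minimal sketch of step 2, assuming vanilla JavaScript and hypothetical section URLs stored in data attributes (e.g., <section data-section-url="/optimus-prime">):

var sections = document.querySelectorAll('[data-section-url]');

window.addEventListener('scroll', function () {
  for (var i = 0; i < sections.length; i++) {
    var rect = sections[i].getBoundingClientRect();
    // When a section reaches the top of the viewport, reflect its URL in the address bar
    if (rect.top <= 0 && rect.bottom > 0) {
      var url = sections[i].getAttribute('data-section-url');
      if (window.location.pathname !== url) {
        history.replaceState({}, '', url); // pushState() also works if you want back-button stops
      }
    }
  }
});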

Solution two (create a view-all page)
Note: This is not recommended for large amounts of content.

1. If it’s possible (i.e., there’s not a ton of content within the infinite scroll), create one page encompassing all content

2. Site latency/page load should be considered

  • Lazy load imagery is a web performance optimization tactic in which images load as the user scrolls (the idea is to save time by downloading images only when they’re needed)
  • Add <img> tags in <noscript> tags (see the example after this list)
  • Use JSON-LD structured data
    • Schema.org “image” attributes nested in appropriate item types
    • Schema.org ImageObject item type
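For reference, hedged examples of those two fallbacks, with a placeholder image URL:

<noscript>
  <img src="https://www.example.com/images/product-photo.jpg" alt="Product photo">
</noscript>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://www.example.com/images/product-photo.jpg",
  "name": "Product photo"
}
</script>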

CSS

I only have a few elements relating to the rendering of CSS.

Action items:

  • CSS background images aren’t picked up in image search, so don’t rely on them for important imagery
  • CSS animations aren’t interpreted, so make sure to add surrounding textual content
  • Page layout matters (use responsive mobile layouts; avoid excessive ads)

Personalization

Although there’s a broader digital trend toward 1:1, people-based marketing, Google doesn’t save cookies across sessions and thus won’t interpret personalization based on cookies, meaning there must be an average, base-user, default experience. The data from other digital channels can be exceptionally useful when building out audience segments and gaining a deeper understanding of the base user.

Action item:

  • Ensure there is a base-user, unauthenticated, default experience

Technology

Google’s rendering engine is leveraging Chrome 41. Canary (Chrome’s testing browser) is currently operating on Chrome 69. Using CanIUse.com, we can infer that this affects Google’s abilities relating to HTTP/2, service workers (think: PWAs), certain JavaScript, specific advanced image formats, resource hints, and new encoding methods. That said, this does not mean we shouldn’t progress our sites and experiences for users — we just must ensure that we use progressive development (i.e., there’s a fallback for less advanced browsers [and Google too ☺]).

Action items:

  • Ensure there’s a fallback for less advanced browsers
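A tiny illustration of that kind of fallback, using feature detection before calling a newer API (the service worker path is a placeholder):

if ('serviceWorker' in navigator) {
  // Modern browsers register the service worker; older browsers (and older
  // rendering engines) skip this block and still get the base experience.
  navigator.serviceWorker.register('/sw.js');
}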

Indexing

Getting pages into Google’s databases is what indexing is all about. From what I’ve experienced, this process is straightforward for most sites.

Action items:

  • Ensure URLs are able to be crawled and rendered
  • Ensure nothing is preventing indexing (e.g., robots meta tag)
  • Submit sitemap in Google Search Console
  • Fetch as Google in Google Search Console

Signaling

A site should strive to send clear signals to search engines. Unnecessarily confusing search engines can significantly impact a site’s performance. Signaling relates to suggesting best representation and status of a page. All this means is that we’re ensuring the following elements are sending appropriate signals.

Action items:

  • <link> tag: This represents the relationship between documents in HTML (see the markup examples below).
    • Rel=”canonical”: This represents appreciably similar content.
      • Are canonicals a secondary solution to 301-redirecting experiences?
      • Are canonicals pointing to end-state URLs?
      • Is the content appreciably similar?
        • Since Google maintains prerogative over determining end-state URL, it’s important that the canonical tags represent duplicates (and/or duplicate content).
      • Are all canonicals in HTML?
      • Is there safeguarding against incorrect canonical tags?
    • Rel=”next” and rel=”prev”: These represent a collective series and are not considered duplicate content, which means that all URLs can be indexed. That said, typically the first page in the chain is the most authoritative, so usually it will be the one to rank.
    • Rel=”alternate”
      • media: typically used for separate mobile experiences.
      • hreflang: indicate appropriate language/country
        • The hreflang is quite unforgiving and it’s very easy to make errors.
        • Ensure the documentation is followed closely.
        • Check GSC International Target reports to ensure tags are populating.
  • HTTP status codes can also be signals, particularly the 304, 404, 410, and 503 status codes.
    • 304 – a valid page that simply hasn’t been modified
    • 404 – file not found
    • 410 – file not found (and it is gone, forever and always)
    • 503 – server maintenance
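For reference, hedged examples of the <link> signals above, with placeholder URLs; these belong in the page’s <head>:

<link rel="canonical" href="https://www.example.com/blue-widgets/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/blue-widgets/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/blue-widgets/" />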

  • Google Search Console settings: Make sure the following reports are all sending clear signals. Occasionally Google decides to honor these signals.
    • International Targeting
    • URL Parameters
    • Data Highlighter
    • Remove URLs
    • Sitemaps

Rank

Rank relates to how search engines arrange web experiences, stacking them against each other to see who ends up on top for each individual query (taking into account numerous data points surrounding the query).

Two critical questions recur often when understanding ranking pages:

  • Does or could your page have the best response?
  • Are you or could you become semantically known (on the Internet and in the minds of users) for the topics? (i.e., are you worthy of receiving links and people traversing the web to land on your experience?)

On-page optimizations

These are the elements webmasters control. Off-page is a critical component to achieving success in search; however, in an idyllic world, we shouldn’t have to worry about links and/or mentions – they should come naturally.

Action items:

  • Textual content:
    • Make content both people and bots can understand
    • Answer questions directly
    • Write short, logical, simple sentences
    • Ensure subjects are clear (not to be inferred)
    • Create scannable content (i.e., make sure <h#> tags are an outline, use bullets/lists, use tables, charts, and visuals to delineate content, etc.)
    • Define any uncommon vocabulary or link to a glossary
  • Multimedia (images, videos, engaging elements):
    • Use imagery, videos, engaging content where applicable
    • Ensure that image optimization best practices are followed
  • Meta elements (<title> tags, meta descriptions, OGP, Twitter cards, etc.)
  • Structured data (see the combined example markup after the accessibility checklist below)

Image courtesy of @abbynhamilton

  • Is content accessible?
    • Is there keyboard functionality?
    • Are there text alternatives for non-text media? Example:
      • Transcripts for audio
      • Images with alt text
      • In-text descriptions of visuals
    • Is there adequate color contrast?
    • Is text resizable?
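
Pulling the on-page and accessibility items above together, a simplified and entirely hypothetical page might look something like this (all names and values are placeholders):

  <head>
    <!-- Meta elements -->
    <title>Blue Widgets | Example Store</title>
    <meta name="description" content="Compare blue widgets by size, price, and material.">
    <!-- Structured data (JSON-LD) describing the page's primary entity -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Blue Widget",
      "description": "A durable blue widget available in three sizes."
    }
    </script>
  </head>
  <body>
    <!-- Headings form a scannable outline; subjects are stated, not inferred -->
    <h1>Blue Widgets</h1>
    <h2>Which size do I need?</h2>
    <p>Most buyers choose the medium widget because it fits standard fixtures.</p>
    <!-- Non-text media carries a text alternative -->
    <img src="blue-widget.jpg" alt="Medium blue widget next to a ruler for scale">
  </body>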

Finding interesting content

Researching and identifying useful content happens in three ways:

  • Keyword and search landscape research
  • On-site analytics deep dives
  • User research

Visual modified from @smrvl via @DannyProl

Audience research

When looking for audiences, we need to concentrate on high percentages (super-high index rates are great, but not required). Push channels (particularly ones with strong targeting capabilities) do better with high index rates. This makes sense: as a pull channel, we need to know that, say, 80% of our customers lean a certain way (because we’re building for the base case), not that five users over-index on a niche topic (those five niche-topic lovers are perfect for targeted ads).

Some seed research questions:

  • Who are users?
  • Where are they?
  • Why do they buy?
  • How do they buy?
  • What do they want?
  • Are they new or existing users?
  • What do they value?
  • What are their motivators?
  • What is their relationship w/ tech?
  • What do they do online?
  • Are users engaging with other brands?
    • Is there an opportunity for synergy?
  • What can we borrow from other channels?
    • Digital channels present a wealth of data, where 1:1, closed-loop, people-based marketing is possible. Leverage any data you can get that you find useful.

Content journey maps

All of this data can then go into creating a map of the user journey and overlaying relevant content. Below are a few types of mappings that are useful.

Illustrative user journey map

Sometimes when trying to process complex problems, it’s easier to break them down into smaller pieces. Illustrative user journeys can help with this! Take a single user’s journey and map it out, aligning relevant content experiences along the way.

Funnel content mapping

This chart is deceptively simple; however, working through it can help teams understand how each stage in the funnel affects users (note: the stages can be modified). The matrix can help with mapping who writers are talking to, what those readers need, and how to move them to the next stage in the funnel.

Content matrix

Mapping out content by intent and branding helps to visualize conversion potential. I find these extremely useful for prioritizing top-converting content initiatives (i.e., start with ensuring branded, transactional content is delivering the best experience, then move towards more generic, higher-funnel terms).

Overviews

Regardless of how the data is broken down, it’s vital to have a high-level view of the audience’s core attributes, the opportunities to improve content, and the strategy for closing the gap.

Connecting

Connecting is all about resonating with humans. Connecting is about understanding that customers are human (and humans have certain cognitive constraints). Our minds are constantly filtering, managing, multitasking, processing, coordinating, organizing, and storing information. It is in our mind’s best interest not to remember 99% of the information and sensations that surround us (think of the lights, sounds, tangible objects, and people around you right now; you’re still able to focus on reading the words on your screen, which is pretty incredible!).

To become psychologically sticky, we must:

  1. Get past the mind’s natural filter. A positive aspect of being a pull marketing channel is that individuals are already seeking out information, making it possible to intersect their user journey in a micro-moment.
  2. From there we must be memorable. The brain tends to hold onto what’s relevant, useful, or interesting. Luckily, the searcher’s interest is already piqued (even if they aren’t consciously aware of why they searched for a particular topic).

This means we have a unique opportunity to “be there” for people. This leads to a very simple, abstract philosophy: a great brand is like a great friend.

We have similar relationship stages, we interweave throughout each other’s lives, and we have the ability to impact each other’s happiness. It comes down to this question: Do your online customers describe your brand with adjectives they would use for a friend?

Action items:

  • Is all content either relevant, useful, or interesting?
  • Does the content honor your user’s questions?
  • Does your brand have a personality that aligns with reality?
  • Are you treating users as you would a friend?
  • Do your users use friend-like adjectives to describe your brand and/or site?
  • Do the brand’s actions align with overarching goals?
  • Is your experience trust-inspiring?
  • Is the site served securely over HTTPS?
  • Does the layout keep ads limited and non-intrusive?
  • Does the site have proof of claims?
  • Does the site use relevant reviews and testimonials?
  • Is contact information available and easily findable?
  • Is relevant information intuitively available to users?
  • Is it as easy to buy/subscribe as it is to return/cancel?
  • Is integrity visible throughout the entire conversion process and experience?
  • Does the site have a credible reputation across the web?

Ultimately, being able to strategically, seamlessly create compelling user experiences which make sense to bots is what the SEO cyborg is all about. ☺

tl;dr

  • Ensure site = crawlable, renderable, and indexable
  • Ensure all signals = clear, aligned
  • Answer related, semantically salient questions
  • Research keywords, the search landscape, and site performance; develop audience segments
  • Use audience segments to map content and prioritize initiatives
  • Ensure content is relevant, useful, or interesting
  • Treat users as friends; be worthy of their trust

This article is based on my MozCon talk (with a few slides from the Appendix pulled forward). The full deck is available on Slideshare, and the official videos can be purchased here. Please feel free to reach out with any questions in the comments below or via Twitter @AlexisKSanders.


