Tag Archive | "User"

5 Ways We Improved User Experience and Organic Reach on the New Moz Help Hub

Posted by jocameron

We’re proud to announce that we recently launched our brand-new Help Hub! This is the section of our site where we store all our guides and articles on how to use Moz Pro, Moz Local, and our research tools like Link Explorer.

Our Help Hub contains in-depth guides, quick and easy FAQs, and some amazing videos like this one. The old Help Hub served us very well over the years, but with time it became a bit dusty and increasingly difficult to update, in addition to looking a bit old and shabby. So we set out to rebuild it from scratch, and we’re already seeing some exciting changes in the search results — which will impact the way people self-serve when they need help using our tools.

I’m going to take you through 5 ways we improved the accessibility and reach of the Help Hub with our redesign. If you write software guides, work in customer experience, or simply write content that answers questions, then this post is worth a look.

If you’re thinking this is just a blatant excuse to inject some Mozzy news into an SEO-style blog post, then you’re right! But if you stick with me, I’ll make sure it’s more fun than switching between the same three apps on your phone with a scrunched-up look of despair etched into your brow. :)

Research and discovery

To understand what features we needed to implement, we decided to ask our customers how they search for help when they get stuck. The results were fascinating, and they helped us build a new Help Hub that serves both our customers and their behavior.

We discovered that 78% of people surveyed search for an answer first before reaching out:

This is a promising sign, and it’s perhaps no surprise that people working in digital marketing and search are very much in the habit of searching for the answers to their questions. However, we also discovered that a staggering 36% couldn’t find a sufficient answer when they searched:

We also researched industry trends and dug into lots of knowledge bases and guides for popular tools like Slack and Squarespace. With this research in our back pockets we felt sure of our goal: to build a Help Hub that reduces the length of the question-search-answer journey and gets answers in front of people with questions.

Let’s not hang about — here are 5 ways we improved organic reach with our beautiful new Help Hub.

#1: Removing features that hide content

Tabbed content used to be a super cool way of organizing a long, wordy guide. Tabs digitally folded the content up like an origami swan. The tabs were all on one page and on one URL, and they worked like jump links to teleport users to that bit of content.

Our old Help Hub design had tabbed content that was hard to find and wasn’t being correctly indexed

The problem: searchers couldn’t easily find this content. There were two reasons for this: one, no one expected to have to click on tabs for discovery; and two (and most importantly), only the first page of content was being linked to in the SERPs. This decimated our organic reach. It was also tricky to link directly to the tabbed content. When our help team members were chatting with our lovely community, it was nearly impossible to quickly send a link to a specific piece of information in a tabbed guide.

Now, instead of having all that tabbed content stacked away like a Filofax, we’ve got beautifully styled and designed content that’s easy to navigate. We pulled previously hidden content on to unique pages that we could link people to directly. And at the top of the page, we added breadcrumbs so folks can orient themselves within the guide and continue self-serving answers to their heart’s content.

Our new design uses breadcrumbs to help folks navigate and keep finding answers

What did we learn?

Don’t hide your content. Features that were originally built in an effort to organize your content can become outdated and get between you and your visitors. Make your content accessible to both search engine crawlers and human visitors; your customer’s journey from question to answer will be more straightforward, making navigation between content more natural and less of a chore. Your customers and your help team will thank you.

#2: Proudly promote your FAQs

This follows on from the point above, and you have had a sneak preview in the screenshot above. I don’t mind repeating myself because our new FAQs more than warrant their own point, and I’ll tell you why. Because, dear reader, people search for their questions. Yup, it’s this new trend and gosh darn it the masses love it.

I mentioned in the point above that tabbed content was proving hard to locate and to navigate, and it wasn’t showing up in the search results. Now we’re displaying common queries where they belong, right at the top of the guides:

FAQ placement, before and after

This change comprises two huge improvements. Firstly, questions our customers are searching, either via our site or in Google, are proudly displayed at the top of our guides, accessible and indexable. Additionally, when our customers search for their queries (as we know they love to do), they now have a good chance of finding the exact answer just a click away.

Address common issues at the top of the page to alleviate frustration

I’ve run a quick search in Keyword Explorer and I can see we’re now in position 4 for this keyword phrase — we weren’t anywhere near that before.

SERP analysis from Keyword Explorer

This is what it looks like in the organic results — the answer is there for all to see.

Our FAQ answer showing up in the search results

And when people reach out? Now we can send links with the answers listed right at the top. No more messing about with jump links to tabbed content.

What did we learn?

In addition to making your content easily accessible, you should address common issues head-on. It can sometimes feel uncomfortable to highlight issues right at the top of the page, but you’ll be alleviating frustration for people encountering errors and reducing the workload for your help team.

You can always create specific troubleshooting pages to store questions and answers to common issues.

#3: Improve article quality and relevance to build trust

This involves using basic on-page optimization techniques when writing or updating your articles. This is bread and butter for seasoned SEOs, although often overlooked by creators of online guides and technical writers.

It’s no secret that we love to inject a bit of Mozzy fun into what we do, and the Help Hub is no exception. It’s a challenge that we relish: to explain the software in clear language that is, hopefully, a treat to explore. However, it turns out we’d become too preoccupied with fun, and our basic on-page optimization sadly lagged behind.

Mirroring customers’ language

Before we started work on our beautiful new Help Hub, we analyzed our most frequently asked questions and commonly searched topics on our site. Next, we audited the corresponding pages on the Help Hub. It was immediately clear that we could do a better job of integrating the language our customers were using to write in to us. By using relevant language in our Help Hub content, we’d be helping searchers find the right guides and videos before they needed to reach out.

Using the MozBar guide as an example, we tried a few different things to improve the CTR over a period of 12 months: we added more content, we updated the meta tags, and we added jump links. Around 8 weeks after the guide was made more relevant and specific to searchers’ troubleshooting queries, we saw a massive uptick in traffic for that MozBar page, with pageviews increasing from around 2.5k per month to around 10k between February 2018 and July 2018. Traffic from organic searches doubled.

Updates to the Help Hub content and the increased traffic over time from Google Analytics

It’s worth noting that traffic to troubleshooting pages can spike if there are outages or bugs, so you’ll want to track this over an 8–12 month period to get the full picture.

What we’re seeing in the chart above is a steady and consistent increase in traffic for a few months. In fact, we started performing too well, ranking for more difficult, higher-volume keywords. This wasn’t exactly what we wanted to achieve, as the content wasn’t relevant to people searching for help for any old plugin. As a result, we’re seeing a drop in August. There’s a sweet spot for traffic to troubleshooting guides. You want to help people searching for answers without ranking for more generic terms that aren’t relevant, which leads us to searcher intent.

Focused on searcher intent

If you had a chance to listen to Dr. Pete’s MozCon talk, you’ll know that while it may be tempting to try to rank well for head vanity keywords, it’s most helpful to rank for keywords where your content matches the needs and intent of the searcher.

While it may be nice to think our guide can rank for “SEO toolbar for chrome” (which we did for a while), we already have a nice landing page for MozBar that was optimized for that search.

When I saw a big jump in our organic traffic, I entered the MozBar URL into Keyword Explorer to hunt down our ranking keywords. I then added these keywords in my Moz Pro campaign to see how we performed over time.

You can see that after our big jump in organic traffic, our MozBar troubleshooting guide dropped 45 places right out of the top 5 pages for this keyword. This is likely because it wasn’t getting very good engagement, as people either didn’t click or swiftly returned to search. We’re happy to concede to the more relevant MozBar landing page.

The troubleshooting guide dropped in the results for this general SEO toolbar query, and rightly so

It’s more useful for our customers and our help team for this page to rank for something like “why wont moz chrome plugin work.” Though this keyword has slightly fewer searches, there we are in the top spot consistently week after week, ready to help.

We want to retain this position for queries that match the nature of the guide

10x content

Anyone who works in customer experience will know that supporting a free tool is a challenge, and I must say our help team does an outstanding job. But we weren’t being kind to ourselves. We found that we were repeating the same responses, day in and day out.

This is where 10x content comes into play. We asked ourselves a very important question: why are we replying individually to one hundred people when we can create content that helps thousands of people?

We tracked common queries and created a video troubleshooting guide. This gave people the hand-holding they required without having to supply it one-to-one, on demand.

The videos for our SEO tools that offer some form of free access attract high views and engagement as folks who are new to them level up.

Monthly video views for tools that offer some free access

To put this into context, if you add up the monthly views for these top 4 videos, they outperform all the other 35 videos on our Help Hub put together:

Video views for tools with some free access vs all the other 35 videos on the Help Hub

What did we learn?

By mirroring your customers’ language and focusing on searcher intent, you can get your content in front of people searching for answers before they need to reach out. If your team is answering the same queries daily, figure out where your content is lacking and think about what you can do in the way of a video or images to assist searchers when they get stuck.

Most SEO work doesn’t have an immediate impact, so track when you’ve made changes and monitor your traffic to draw correlations between visitors arriving on your guides and the changes you’ve made. Try testing updates on a portion of your pages and tracking the results, then roll the updates out to the rest of your pages.

More traffic isn’t always a good thing; it could indicate an outage or issue with your tool. Analyzing traffic data is the start of the journey to understanding the needs of people who use your tools.

#4: Winning SERP features by reformatting article structure

While we ramped up our relevance, we also reviewed our guide structure to get it ready for migration to the new Help Hub CMS. We took paragraphs of content and turned them into clearly labelled step-by-step guides.

Who is this helping? I’m looking at you, 36% of people who couldn’t find what they were looking for! We’re coming at you from two angles here: people who never found the page they were searching for, and people who did, but couldn’t digest the content.

Here is an example from our guide on adding keywords to Moz Pro. We started with blocks of paragraphed content interspersed with images. After reformatting, we have a video right at the top and then a numbered list which outlines the steps.

Before: text and images. After: clearly numbered step-by-step guides.

When researching the results for this blog post, I searched for a few common questions to see how we were looking in the search results. And what did I find? Just a lovely rich snippet with our newly formatted steps! Magic!

Our new rich snippet with the first 4 steps and a screenshot of our video

We’ve got all the things we want in a rich snippet: the first 4 steps with the “more items” link (hello, CTR!), a link to the article, and a screenshot of the video. On the one hand, the image of the video looks kind of strange; on the other, it clearly labels the result as a Moz guide, which could prove rather tempting for people clicking through from the results. We’ll watch how this performs over time to figure out whether we can improve on it in future.

Let’s go briefly back in time and see what the original results were for this query, pre-reformatting. Not quite so helpful, now, is it?

Search results before we reformatted the guide

What did we learn?

By clearly arranging your guide’s content into steps or bullet points, you’re improving the readability for human visitors and for search engines, who may just take it and use it in a rich snippet. The easier it is for people to comprehend and follow the steps of a process, the more likely they are to succeed — and that must feel significantly better than wading through a wall of text.

#5: Helping people at the end of the guide

At some point, someone will be disappointed by the guide they ended up on. Maybe it doesn’t answer their question to their satisfaction. Maybe they ended up in the wrong place.

That’s why we have two new features at the end of our guides: Related Articles and Feedback buttons.

The end of the guides, before and after

Related Articles

Related Articles help people to continue to self-serve, honing in on more specific guides. I’m not saying that you’re going to buckle down and binge-read ALL the Moz help guides — I know it’s not exactly Netflix. But you never know — once you hit a guide on Keyword Lists, you may think to yourself, “Gosh, I also want to know how to port my lists over to my Campaign. Oh, and while I’m here, I’m going to check on my Campaign Settings. And ohh, a guide about setting up Campaigns for subdomains? Don’t mind if I do!” Guide lovers around the world, rejoice!

Feedback buttons

I know that feedback buttons are by no means a new concept in the world of guides. It seems like everywhere you turn there’s a button, a toggle, or a link to let some mysterious entity somewhere know how you felt about this, that, and the other.

Does anyone ever actually use this data? I wondered. The trick is to gather enough information that you can analyze trends and respond to feedback, but not so much that wading through it is a major time-wasting chore.

When designing this feature, our aim was to gather actionable feedback from the folks we’re looking to help. Our awesome design, UX, and engineering teams built us something pretty special that we know will help us keep improving efficiently, without any extra noise.

Our new feedback buttons gather the data we need from the people we want to hear from

To leave feedback on our guides, you have to be logged in to your Moz account, so we can be sure we’re helping people who engage with our tools. Simple, but effective. Clicking “Yes, thank you!” ends the journey there: job done, no need for more information for us to sift through. Clicking “No, not really” opens up a feedback box to let us know how we can improve.

People are already happily sending through suggestions, which we can turn into content and FAQs in a very short space of time:

Comments from visitors on how we can improve our guides

If you find yourself on a guide that helps (or not so much), then please do let us know!

The end of an article isn’t the end of the line for us — we want to keep moving forward and building on our content and features.

What did we learn?

We discovered that we’re still learning! Feedback can be tough to stomach and laborious to analyze, so spend some time figuring out who you want to hear from and how you can process that information.


If you have any other ideas about what you’d like to see on the Help Hub, whether it’s a topic, an FAQ, or a snazzy feature to help you find the answers to your questions, please do let us know in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


It’s time to embrace new strategies for apparel: Broadening tactics through user intent

Clothing retailers, listen up! Columnist Thomas Stern explains why apparel brands must evolve their strategies and measurement beyond just conversions.

The post It’s time to embrace new strategies for apparel: Broadening tactics through user intent appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


SearchCap: Pinterest Lens, SEO budgets & reviews user behavior

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Pinterest Lens, SEO budgets & reviews user behavior appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?


Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

  1. HTML – Hypertext Markup Language serves as the backbone, or organizer, of content on a site. It provides the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines static content.
  2. CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. They make up the presentation layer of the page.
  3. JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
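As a minimal sketch of those two placements (the element ID and the file name app.js are placeholders, not something from the original article):

    <!-- Embedded: the JavaScript lives inside the HTML document -->
    <script>
      document.getElementById('tagline').textContent = 'Hello, Moz fans!';
    </script>

    <!-- Referenced: the JavaScript is loaded from a separate file -->
    <script src="/js/app.js"></script>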


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
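Here’s a rough sketch of the idea using fetch() in place of the older XMLHttpRequest; the /api/listings endpoint, the listings element, and the shape of the response are all made up for illustration:

    // Ask the server for new data in the background, then update part of the
    // page without triggering a full page reload.
    fetch('/api/listings?page=2')
      .then(function (response) { return response.json(); })
      .then(function (data) {
        // Only the listings container changes; the rest of the page stays put.
        document.getElementById('listings').innerHTML = data.html;
      })
      .catch(function (error) { console.error('AJAX request failed:', error); });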


What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript:

HTML source

DOM
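A minimal reconstruction of that kind of example (not the exact markup from the screenshots above): the HTML source ships with an empty <title>, and JavaScript fills it in, so the final value only exists in the DOM.

    <!-- HTML source: the title element arrives empty -->
    <title></title>
    <script>
      // After this runs, the DOM contains <title>Moz - SEO Software</title>,
      // even though the raw HTML source never did.
      document.title = 'Moz - SEO Software';
    </script>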

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
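For instance, a bare-bones PhantomJS script that fetches a page without a UI and prints the rendered HTML might look something like this (a sketch, not a production setup; the URL is a placeholder):

    // Save as snapshot.js and run with: phantomjs snapshot.js
    var page = require('webpage').create();

    page.open('https://example.com/', function (status) {
      if (status === 'success') {
        // page.content holds the rendered HTML after scripts have run
        console.log(page.content);
      }
      phantom.exit();
    });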


Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three (3) primary reasons to be concerned about JavaScript on your site:

  1. Crawlability: Bots’ ability to crawl your site.
  2. Obtainability: Bots’ ability to access information and parse your content.
  3. Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

  1. Blocking search engines from your JavaScript (even accidentally).
  2. Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help to identify resources that Googlebot is blocked from.

The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
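A quick illustration of the difference (the URL is a placeholder):

    <!-- Crawlable internal link: a regular anchor tag with an href -->
    <a href="/learn/seo/internal-links">Internal links guide</a>

    <!-- Not recommended: a JavaScript event standing in for a link. The URL may
         still be discovered elsewhere, but this element isn't treated as part
         of the site's navigation. -->
    <span onclick="window.location.href='/learn/seo/internal-links'">
      Internal links guide
    </span>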

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) used fragment identifiers (#) within URLs.

  • Not recommended:
    • The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server, and it causes the page to automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
    • Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (one that Google now wants to avoid and that only Bing still supports). Many a moon ago, Google and Bing developed a complicated AJAX solution whereby a pretty (#!) URL serving the UX co-existed with an equivalent escaped_fragment, HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
      • Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name=”fragment” content=”!”>).
      • Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.


  • Recommended:
    • pushState History API – pushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs. pushState is currently supported by Google when used for browser navigation with client-side or hybrid rendering.
      • A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
      • Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back-button functionality as pushState.
      • Read more: Mozilla PushState History API Documents
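As a simplified sketch of pushState in an infinite-scroll setting (the URLs, element IDs, and section numbering are invented for illustration):

    // When the next chunk of content is appended, update the address bar so
    // the clean URL reflects the section the user is currently viewing.
    function loadNextSection(sectionNumber) {
      fetch('/guides/section-' + sectionNumber + '.html')
        .then(function (response) { return response.text(); })
        .then(function (html) {
          document.getElementById('content').insertAdjacentHTML('beforeend', html);
          // Only the URL and the history entry change; there is no full reload.
          history.pushState({ section: sectionNumber }, '', '/guides/section-' + sectionNumber);
        });
    }

    // Handle the back button by reacting to the state stored earlier.
    window.addEventListener('popstate', function (event) {
      if (event.state) {
        console.log('Back to section', event.state.section);
      }
    });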

Obtainability

Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

  • If the user must interact for something to fire, search engines probably aren’t seeing it.
    • Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
  • If the JavaScript runs after the load event fires plus ~5 seconds*, search engines may not be seeing it.
    • *John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
    • *Screaming Frog tests show a correlation to five seconds to render content.
    • *The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
  • If there are errors within the JavaScript, both browsers and search engines may be unable to execute the code in full and can miss sections of the page.

How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript way before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 test, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.

adamaudette - I don't always JavaScript, but when I do, I know google can crawl the dom and dynamically generated HTML

Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., is it indexing the URLs and content? How does GSC interact? etc.). It ultimately showed that Google is able to interact with many forms of JavaScript, and it highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

  1. Confirm that your content is appearing within the DOM.
  2. Test a subset of pages to see if Google can index content.
    • Manually check quotes from your content.
    • Fetch with Google and see if content appears.
      • Fetch with Google supposedly occurs around the load event or before timeout. It’s a great test to check to see if Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
      • Note: If you aren’t verified in GSC, try Technicalseo.com’s Fetch and Render As Any Bot Tool.

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.

2. HTML SNAPSHOTS
What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).

Considerations

When considering HTML snapshots, keep in mind that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience that is truer to the user experience.

A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
Google Developer AJAX Crawling FAQs

Benefits

Despite the considerations, HTML snapshots have powerful advantages:

  1. Knowledge that search engines and crawlers will be able to understand the experience.
    • Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).
  2. Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
    • Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.

"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – Optimized Rendering Loads Progressively ASAP:


However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:

  1. Inline: Add the JavaScript in the HTML document.
  2. Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
  3. Defer: Defer the JavaScript by placing it lower within the HTML document.

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.
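Here’s a small sketch of what those three options look like in markup (analytics.js and app.js are placeholder file names):

    <!-- 1. Inline: small, critical scripts can live directly in the HTML -->
    <script>
      window.dataLayer = window.dataLayer || [];
    </script>

    <!-- 2. Async: fetched in parallel and executed as soon as it arrives,
         without blocking HTML parsing (execution order is not guaranteed) -->
    <script async src="/js/analytics.js"></script>

    <!-- 3. Defer: fetched in parallel but executed only after the document
         has been parsed, preserving the order of deferred scripts -->
    <script defer src="/js/app.js"></script>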

Read more: Google Developer’s Speed Documentation


TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


User Behaviour Data as a Ranking Signal

Posted by Dan-Petrovic


Question: How do search engines interpret user experience?
Answer: They collect and process user behaviour data.

Types of user behaviour data used by search engines include click-through rate (CTR), navigational paths, time, duration, frequency, and type of access.

Click-through rate

Click-through rate analysis is one of the most prominent search quality feedback signals in both commercial and academic information retrieval papers. Both Google and Microsoft have made considerable efforts towards development of mechanisms which help them understand when a page receives higher or lower CTR than expected.

Position bias

CTR values are heavily influenced by position because users are more likely to click on top results. This is called “position bias,” and it’s what makes it difficult to accept that CTR can be a useful ranking signal.

The good news is that search engines have numerous ways of dealing with the bias problem. In 2008, Microsoft found that the “cascade model” worked best in bias analysis. Despite a slight degradation in confidence for lower-ranking results, it performed really well without any need for training data, and it operated parameter-free. The significance of their model is that it offered a cheap and effective way to handle position bias, making CTR more practical to work with.
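To make the idea concrete, here’s a toy sketch (with invented numbers, not anything from Microsoft’s paper) of how a cascade model relates per-position attractiveness to the click-through rates you’d expect to observe: the user scans results top-down and only examines a position if every result above it was skipped.

    // attractiveness[i] = probability that result i is clicked *if examined*.
    function expectedCtrByPosition(attractiveness) {
      var examineProbability = 1; // the top result is always examined
      return attractiveness.map(function (a) {
        var expectedCtr = examineProbability * a;
        // The next position is only examined if this one was not clicked.
        examineProbability *= (1 - a);
        return expectedCtr;
      });
    }

    // Even modest differences in attractiveness produce a steep observed CTR
    // curve by position, which is exactly the bias to correct for.
    console.log(expectedCtrByPosition([0.45, 0.30, 0.20, 0.15]));
    // -> [0.45, 0.165, 0.077, 0.046] (approximately)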

Result attractiveness

Good CTR is a relative term. A 30% CTR for a top result in Google wouldn’t be a surprise, unless it’s a branded term; then it would be a terrible CTR. Likewise, the same value for a competitive term would be extraordinarily high if nested between “high-gravity” search features (e.g. an answer box, knowledge panel, or local pack).

I’ve spent five years closely observing CTR data in the context of its dependence on position, snippet quality and special search features. During this time I’ve come to appreciate the value of knowing when deviation from the norm occurs. In addition to ranking position, consider other elements which may impact the user’s choice to click on a result:

  • Snippet quality
  • Perceived relevance
  • Presence of special search result features
  • Brand recognition
  • Personalisation

Practical application

Search result attractiveness is not an abstract academic problem. When done right, CTR studies can provide a lot of value to a modern marketer. Here’s a case study where I take advantage of CTR average deviations in my phrase research and page targeting process.

Google’s title bolding study

Google is also aware of additional factors that contribute to result attractiveness bias, and they’ve been busy working on non-position click bias solutions.

Google CTR study

They show strong interest in finding ways to improve the effectiveness of CTR-based ranking signals. In addition to solving position bias, Google’s engineers have gone one step further by investigating SERP snippet title bolding as a result attractiveness bias factor. I find it interesting that Google recently removed bolding in titles for live search results, likely to eliminate the bias altogether. Their paper highlights the value in further research focused on the bias impact of specific SERP snippet features.

URL access, duration, frequency, and trajectory

Logged click data is not the only useful user behaviour signal. Session duration, for example, is a high-value metric if measured correctly: a user could navigate to a page and then leave it idle while they go out for lunch. This is where active user monitoring systems become useful.

There are many assisting user-behaviour signals which, while not indexable, aid measurement of engagement time on pages. This includes various types of interaction via keyboard, mouse, touchpad, tablet, pen, touch screen, and other interfaces.

Google’s John Mueller recently explained that user engagement is not a direct ranking signal, and I believe this. Kind of. John said that this type of data (time on page, filling out forms, clicking, etc) doesn’t do anything automatically.

At this point in time, we’re likely looking at a sandbox model rather than a live listening and reaction system when it comes to the direct influence of user behaviour on a specific page. That said, Google does acknowledge limitations of quality-rater and sandbox-based result evaluation. They’ve recently proposed an active learning system, which would evaluate results on the fly with a more representative sample of their user base.

“Another direction for future work is to incorporate active learning in order to gather a more representative sample of user preferences.”

Google’s result attractiveness paper was published in 2010. In early 2011, Google released the Panda algorithm. Later that year, Panda went into flux, indicating an implementation of one form of an active learning system. We can expect more of Google’s systems to run on their own in the future.

The monitoring engine

Google has designed and patented a system in charge of collecting and processing of user behaviour data. They call it “the monitoring engine”, but I don’t like that name—it’s too long. Maybe they should call it, oh, I don’t know… Chrome?

The actual patent describing Google’s monitoring engine is a truly dreadful read, so if you’re in a rush, you can read my highlights instead.

MetricsService

Let’s step away from patents for a minute and observe what’s already out there. Chrome’s MetricsService is a system in charge of the acquisition and transmission of user log data. Transmitted histograms contain very detailed records of user activities, including opened/closed tabs, fetched URLs, maximized windows, et cetera.

Enter this in Chrome: chrome://histograms/

Here are a few external links with detailed information about Chrome’s MetricsService, reasons and types of data collection, and a full list of histograms.

Use in rankings

Google can process duration data in an eigenvector-like fashion using nodes (URLs), edges (links), and labels (user behaviour data). Page engagement signals, such as session duration, are used to calculate the weights of nodes. Here are two models of a simplified graph comprising three nodes (A, B, C) with time labels attached to each:

nodes

In an undirected graph model (undirected edges), the weight of the node A is directly driven by the label value (120 second active session). In a directed graph (directed edges), node A links to node B and C. By doing so, it receives a time-label credit from the nodes it links to.

In plain English, if you link to pages that people spend a lot of time on, Google will add a portion of that “time credit” towards the linking page. This is why linking out to useful, engaging content is a good idea. A “client behavior score” reflects the relative frequency and type of interactions by the user.

What’s interesting is that the implicit quality signals of deeper pages also flow up to higher-level pages.
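A toy sketch of that idea (the node names, dwell times, and the 0.5 credit factor below are all invented for illustration; the real weighting isn’t public): each page’s weight starts with its own engagement time, and a share of the engagement on the pages it links to flows back to it.

    // Toy model: weight = own active session time + a share of the engagement
    // time of the pages this page links to (directed graph with time labels).
    var CREDIT_FACTOR = 0.5; // illustrative only

    var pages = {
      A: { seconds: 120, linksTo: ['B', 'C'] },
      B: { seconds: 40,  linksTo: [] },
      C: { seconds: 80,  linksTo: [] }
    };

    function nodeWeight(id) {
      var page = pages[id];
      var credit = page.linksTo.reduce(function (sum, target) {
        return sum + pages[target].seconds * CREDIT_FACTOR;
      }, 0);
      return page.seconds + credit;
    }

    console.log(nodeWeight('A')); // 120 + 0.5 * (40 + 80) = 180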

Reasonable surfer model

“Reasonable surfer” is the random surfer’s successor. The PageRank dampening factor reflects the original assumption that after each followed link, our imaginary surfer is less likely to click on another random link, resulting in an eventual abandonment of the surfing path. Most search engines today work with a more refined model encompassing a wider variety of influencing factors.

For example, the likelihood of a link being clicked on within a page may depend on:

  • Position of the link on the page (top, bottom, above/below fold)
  • Location of the link on the page (menu, sidebar, footer, content area, list)
  • Size of anchor text
  • Font size, style, and colour
  • Topical cluster match
  • URL characteristics (external/internal, hyphenation, TLD, length, redirect, host)
  • Image link, size, and aspect ratio
  • Number of links on page
  • Words around the link, in title, or headings
  • Commerciality of anchor text

In addition to perceived importance from on-page signals, a search engine may judge link popularity by observing common user choices. A link on which users click more within a page can carry more weight than one with fewer clicks. Google in particular mentions monitoring user click behaviour in the context of balancing out traditional, more easily manipulated signals (e.g. links).
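To illustrate how a handful of those factors might be combined, here’s a rough sketch in which each link gets a score from a few invented features and the scores are normalised into click probabilities (the features and multipliers are made up; the real model and its weights aren’t public):

    // Toy "reasonable surfer" scoring: prominent, frequently clicked links
    // end up with a larger share of the page's link weight.
    function linkScore(link) {
      var score = 1;
      if (link.position === 'content') score *= 3;   // content links beat footer links
      if (link.aboveFold) score *= 2;                // visible without scrolling
      if (link.observedClickShare) score *= 1 + link.observedClickShare; // user behaviour
      return score;
    }

    function clickProbabilities(links) {
      var scores = links.map(linkScore);
      var total = scores.reduce(function (a, b) { return a + b; }, 0);
      return scores.map(function (s) { return s / total; });
    }

    console.log(clickProbabilities([
      { position: 'content', aboveFold: true,  observedClickShare: 0.6 },
      { position: 'footer',  aboveFold: false, observedClickShare: 0.1 }
    ])); // -> roughly [0.90, 0.10]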

In the following illustration, we can see two outbound links on the same document (A) pointing to two other documents: (B) and (C). On the left is what would happen in the traditional “random surfer model,” while on the right we have a link which sits on a more prominent location and tends to be a preferred choice by many of the pages’ visitors.

link nodes

This method can be used on a single document or in a wider scope, and is also applicable to both single users (personalisation) and groups (classes) of users determined by language, browsing history, or interests.

Pogo-sticking

One of the most telling signals for a search engine is when users perform a query and quickly bounce back to the search results after visiting a page that didn’t satisfy their needs. The effect was described and discussed a long time ago, and numerous experiments show it in action. That said, many question the validity of SEO experiments, largely due to their rather non-scientific execution and general data noise. So, it’s nice to know that the effect has been on Google’s radar.

Address bar

URL data can include whether a user types a URL into an address field of a web browser, or whether a user accesses a URL by clicking on a hyperlink to another web page or a hyperlink in an email message. So, for example, if users type in the exact URL and hit enter to reach a page, that represents a stronger signal than when visiting the same page after a browser autofill/suggest or clicking on a link.

  • Typing in full URL (full significance)
  • Typing in partial URL with auto-fill completion (medium significance)
  • Following a hyperlink (low significance)

Login pages

Google monitors users and maps their journey as they browse the web. They know when users log into something (e.g. social network) and they know when they end the session by logging out. If a common journey path always starts with a login page, Google will add more significance to the login page in their rankings.

"A login page can start a user on a trajectory, or sequence, of associated pages and may be more significant to the user than the associated pages and, therefore, merit a higher ranking score."

I find this very interesting. In fact, as I write this, we’re setting up a login experiment to see if repeated client access and page engagement impacts the search visibility of the page in any way. Readers of this article can access the login test page with username: moz and password: moz123.

The idea behind my experiment is to have all the signals mentioned in this article ticked off:

  • URL familiarity, direct entry for maximum credit
  • Triggering frequent and repeated access by our clients
  • Expected session length of 30-120 seconds
  • Session length credit up-flow to home page
  • Interactive elements add to engagement (export, chart interaction, filters)

Combining implicit and traditional ranking signals

Google treats various user-generated data with different degrees of importance. Combining implicit signals such as day of the week, active session duration, visit frequency, or type of article with traditional ranking methods improves reliability of search results.

page quality metrics

Impact on SEO

The fact that behaviour signals are on Google’s radar stresses the rising importance of user experience optimisation. Our job is to incentivise users to click, engage, convert, and keep coming back. This complex task requires a multidisciplinary mix, including technical, strategic, and creative skills. We’re being evaluated by both users and search engines, and everything users do on our pages counts. The evaluation starts at the SERP level and follows users during the whole journey throughout your site.

“Good user experience”

Search visibility will never depend on subjective user experience, but on search engines’ interpretation of it. Our most recent research into how people read online shows that users don’t react well when facing large quantities of text (this article included) and will often skim content and leave if they can’t find answers quickly enough. This type of behaviour may send the wrong signals about your page.

My solution was to present all users with a skeletal form of the content, with supplementary content available on demand through the use of hypotext. As a result, our test page (~5,000 words) increased the average time per user from 6 to 12 minutes, and the bounce rate dropped from 90% to 60%. The very article where we published our findings shows clicks, hovers, and scroll-depth activity at double or triple the values of the rest of our content. To me, this was convincing enough.

clicks

Google’s algorithms disagreed, however, devaluing the content not visible on the page by default. Queries contained within unexpanded parts of the page aren’t bolded in SERP snippets and currently don’t rank as well as pages which copied that same content but made it visible. This is ultimately something Google has to work on, but in the meantime we have to be mindful of this perception gap and make calculated decisions in cases where good user experience doesn’t match Google’s best practices.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Etsy Boosts Search For Better Content Discovery, User Engagement

Etsy has more than 30 million items for sale from more than one million sellers globally. There are no SKUs and most of the data is unstructured, creating a messy and massive discovery challenge for both Etsy and its users. Accordingly, the company is today rolling out more sophisticated search…



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing



SearchCap: Google On Link Building, Mobile User Interface Tests & Local SEO

Below is what happened in search today, as reported on Search Engine Land and from other places across the web.

The post SearchCap: Google On Link Building, Mobile User Interface Tests & Local SEO appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


A Search Marketer’s Guide To Becoming An Excel Power User

Start here to fine-tune your marketing analytics skills using Microsoft Excel and similar functions in Google Spreadsheets to build custom reports and executive marketing dashboards. Plus, find tips to integrate your data from Google Analytics, Google Webmaster Tools, related APIs and other search…



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing



The Future of User Behavior – Whiteboard Friday

Posted by willcritchlow

In the early days of search, Google used only your typed query to find the most relevant results. We’re now increasingly seeing SERPs that are influenced by all kinds of contextual information — the implicit queries.

In today’s Whiteboard Friday, Will Critchlow covers what exactly that means and how it might explain why we see “(not provided)” in our analytics more often than we’d like.









PRO Tip: Learn more about how Google ranks pages at Moz Academy.

For reference, here’s a still image of this week’s whiteboard:

Video Transcription

Hi, Moz fans. I’m Will Critchlow, one of the founders of Distilled, and I want to talk today about the future of user behavior, something that I’ve been talking about at MozCon this year. In particular, I want to talk about the implications of query enhancement. So I’m going to start by telling you what we mean by this phrase.

Old-school query, key phrase, this is what we’ve talked about for a long time. In SEO, something like “London tube stations,” a bunch of words strung together, that’s the entire query, and we would call it a query or a key phrase. But we’ve been defining what we call the “new query,” which is made up of two parts. The explicit query here in blue is London tube stations, again, in this example, exactly the same. What we’re calling the “implicit query” is essentially all of the other information that the search engine knows about you: what they know about you in general, what they know about you at this specific moment in time, and what they know about your recent history, plus any other factors they want to factor in.

So, in this particular case, I’ve said this is an iPhone user, they’re on the street, they’re in London. You can imagine how this information changes the kind of thing that you might be looking for when you perform a query like this or indeed any other.

This whole model is something that we’ve been kind of building out and thinking about a lot this year. Tom Anthony, one of my colleagues in London, presented this at a conference, and we’ve been working on it together. We came up with this kind of visual representation of what we think is happening over time. As people get used to this behavior, they see it in the search results, and they adapt to the information that they’re receiving back from the search engine.

So old school search results where everybody’s search result was exactly the same, if they performed a particular query, no matter where in the world they were, wherever in the country they were, whatever device they were on, whatever time of day it was, whatever their recent history, everybody’s was the same. In other words, the only information that the search engine is taking into account in this case is the old-style query, the explicit part.

Then, what we’ve seen is that there’s gradually been this implicit query information being added on top. You may not be able to see it from my brilliant hand-drawn diagram here, but my intention is that these blue bars are the same height out to here. So, at this point, there’s all of the explicit query information being passed over. In other words, I’m doing the same kind of search I’ve always done. But Google is taking into account this extra, implicit information about me, what it knows about me, what it knows about my device, what it knows about my history and so forth. Therefore, Google has more information here than they did previously. They can return better results.

That’s kind of what we’ve been talking about for a long time, I think, this evolution of better search results based on the additional information that the search engines have about us. But what we’re starting to see and what we’re certainly predicting is going to become more and more prevalent is that as the implicit information that search engines have grows, and, in particular, as their ability to use that information intelligently improves, then we’re actually going to see users start to give less explicit information over. In other words, they’re going to trust that the search engines are going to pull out the implicit information that they need. So I can do a much shorter, simpler query.

But what you see here is, again, to explain my hand-drawn diagram in case it’s not perfectly beautiful, that the blue bars are declining: I’m sending less and less explicit information over as time goes along. But the total information that search engines have to work with is actually increasing as time goes on, because the implicit information they’re gathering is growing faster than the explicit information is declining.
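If it helps, here’s a toy rendering of that trend with completely made-up numbers: the explicit column shrinks over time, the implicit column grows faster, and the total keeps rising.

```python
# Toy numbers only: they mimic the shape of the hand-drawn diagram, not real data.
years    = [2009, 2011, 2013, 2015]
explicit = [10, 10, 8, 5]    # users type less and less over time
implicit = [0, 4, 10, 18]    # engine-held context grows faster

for year, e, i in zip(years, explicit, implicit):
    print(f"{year}: explicit={e:2d}  implicit={i:2d}  total={e + i:2d}")
```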

I can give you a concrete example of this. I vividly remember giving a talk about keyword research a few years ago, and I was kind of mocking that business owner. We’ve all met these business owners who want to rank for the one-word key phrase: “I want to rank for restaurant,” or whatever. I said, “This is ridiculous. What in the world can you imagine somebody is possibly looking for when they do a search for ‘restaurant’?”

Back then, if you did a search like that, you got a kind of weird mix, because this was back in the days when there was essentially no implicit information being taken in. You got a mix of the most powerful websites of actual restaurants anywhere in your country, plus some news, like a powerful page on a big domain, those kinds of things. Probably a Wikipedia entry. Why would a business owner want to rank for that stuff? It’s going to convert horribly poorly.

But my mind was changed powerfully when I caught myself. I was in Boston, and I caught myself doing a search for “breakfast.” I went to Google, typed in “breakfast,” hit Search. What was I thinking? What exactly was I hoping the outcome was going to be here? Well, actually, I’ve trained myself to believe that all of this other implicit information is going to be taken into account, and, in fact, it was. So, instead of getting that old-style Wikipedia entry, a news result, a couple of random restaurants from somewhere in the country, I got a local pack, and I got some local Boston news articles on the top 10 places to have breakfast in Boston. It was all customized to my exact location, so I got some stuff that was really near me, and I found a great place to have breakfast just around the corner from the hotel. So that worked.

I’ve actually noticed myself doing this more and more, and I imagine, given obviously the industry I work in, I’m pretty much an early adopter here. But I think we’re going to see all users adopt this style of searching more and more, and it’s really going to change how we as marketers have to think, because it doesn’t mean that you need to go out there and rank for the generic keyword “breakfast.” But it does mean that you need to take into account all of the possible ways that people might be searching for these things and the various different ways that Google might piece together a useful search result when somebody gives them such apparently unhelpful explicit information, in particular, obviously, in this case, local.

I kind of mentioned “not provided” down here. This is my one, I guess, non-conspiracy-theory view of what could be going on with the whole not provided thing: if Google’s model is looking more and more like this and less like this, and, in particular, as we get further over to this end (and of course, you can consider something like Google Now the extreme of this, where there is in fact no blue bar, just pure orange), then the reliance on keywords goes away. Maybe the not provided thing is actually more of a strategic message from Google, kind of saying, “We’re not necessarily thinking in terms of keywords anymore. We’re thinking in terms of your need at a given moment in time.”

So, anyway, I hope that’s been a useful kind of rapid-fire run-through of what I think is going to happen as people get used to the power of query enhancement. I’m Will Critchlow. Until next time, thanks.

Video transcription by Speechpad.com


Moz Blog


Getting Granular With User Generated Content

The stock market had a flash crash today after someone hacked the AP account & made a fake announcement about bombs going off at the White House. Recently, Twitter’s search functionality has grown so inundated with spam that I don’t even look at brand-related searches much anymore. While you can block individual users, blocking doesn’t keep them out of search results, so various affiliate bots spam just about any semi-branded search.

Of course, as spammy as the service is now, it was worse during the explosive growth period, when Twitter had fewer than 10 employees fighting spam:

Twitter says its “spammy” tweet rate of 1.5% in 2010 was down from 11% in 2009.

If you want to show growth by any means necessary, engagement by a spam bot is still engagement & still lifts the valuation of the company.

Many of the social sites make no effort to police spam & only combat it after users flag it. Consider Eric Schmidt’s interview with Julian Assange, where Eric Schmidt stated:

  • “We [YouTube] can’t review every submission, so basically the crowd marks it if it is a problem post publication.”
  • On Wikileaks vs. YouTube: “You have a different model, right. You require human editors.”

We would post editorial content more often, but we are sort of debating opening up a social platform so that we can focus on the user without having to bear any editorial costs until after the fact. Profit margins are apparently better that way.

As Google drives smaller sites out of the index & ranks junk content based on no factor other than it being on a trusted site, they create the incentive for spammers to ride on the social platforms.

All aboard. And try not to step on any toes!

When I do some product-related searches (e.g. brand name & shoe model), almost the whole result set for the first 5 or 10 pages is garbage hosted on free platforms like these (a rough way to flag such hosts is sketched after the list):

  • Blogspot.com subdomains
  • Appspot.com subdomains
  • YouTube accounts
  • Google+ accounts
  • sites.google.com
  • WordPress.com subdomains
  • Facebook Notes & pages
  • Tweets
  • Slideshare
  • LinkedIn
  • blog.yahoo.com
  • subdomains off of various other free hosts
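As a rough illustration (my own sketch, not anything Google actually runs), you could flag result URLs whose hosts sit on free user-generated-content platforms like the ones above; the domain list and sample URLs are hypothetical.

```python
# Hypothetical example: flag SERP result URLs hosted on free UGC platforms.
from urllib.parse import urlparse

FREE_UGC_HOSTS = (
    "blogspot.com", "appspot.com", "youtube.com", "plus.google.com",
    "sites.google.com", "wordpress.com", "facebook.com", "twitter.com",
    "slideshare.net", "linkedin.com", "blog.yahoo.com",
)

def on_free_ugc_host(url: str) -> bool:
    """Return True if the URL's host is (or is a subdomain of) a free UGC host."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in FREE_UGC_HOSTS)

# Hypothetical results for a "brand name & shoe model" style query.
results = [
    "http://cheap-brand-shoes.blogspot.com/model-review.html",
    "http://www.independent-retailer-example.com/brand/model",
]
for url in results:
    label = "UGC platform" if on_free_ugc_host(url) else "independent site"
    print(f"{label}: {url}")
```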

It comes as no surprise that Eric Schmidt fundamentally believes that “disinformation becomes so easy to generate because of, because complexity overwhelms knowledge, that it is in the people’s interest, if you will over the next decade, to build disinformation generating systems, this is true for corporations, for marketing, for governments and so on.”

Of course he made no mention of Google’s role in the above problem. When they are not issuing threats & penalties to smaller independent webmasters, they are just a passive omniscient observer.

Across all these businesses there is a core model: build up a solid stream of usage data, then trick users or look the other way when things get out of hand. Consider these tips on YouTube from Google’s Lane Shackleton:

  • “Search is a way for a user to explicitly call out the content that they want. If a friend told me about an Audi ad, then I might go seek that out through search. It’s a strong signal of intent, and it’s a strong signal that someone found out about that content in some way.”
  • “you blur the lines between advertising and content. That’s really what we’ve been advocating our advertisers to do.”
  • “you’re making thoughtful content for a purpose. So if you want something to get shared a lot, you may skew towards doing something like a prank”

Harlem Shake & Idiocracy: the innovative way forward to improve humanity.

Life is a prank.

This “spam is fine, so long as it is user generated” stuff has gotten so out of hand that Google is now implementing granular page-level penalties. When those granular penalties hit major sites, Google suggests that those sites may receive clear advice on what to fix, just by contacting Google:

Hubert said that if people file a reconsideration request, they should “get a clear answer” about what’s wrong. There’s a bit of a Catch-22 there. How can you file a reconsideration request showing you’ve removed the bad stuff, if the only way you can get a clear answer about the bad stuff to remove is to file a reconsideration request?

The answer is that technically, you can request reconsideration without removing anything. The form doesn’t actually require you to remove bad stuff. That’s just the general advice you’ll often hear Google say, when it comes to making such a request. That’s also good advice if you do know what’s wrong.

But if you’re confused and need more advice, you can file the form asking for specifics about what needs to be removed. Then have patience.

In the past I noted that there is no difference between a formal whitelist & overly aggressive penalties coupled with loose exemptions for select parties.

The moral of the story is that if you are going to spam, you should make it look like a user of your site did it. That way, you:

  • are above judgement
  • receive only a limited granular penalty
  • get explicit & direct feedback on what to fix

SEO Book

