Tag Archive | "Need"

The Minimum Viable Knowledge You Need to Work with JavaScript & SEO Today

Posted by sergeystefoglo

If your work involves SEO at some level, you’ve most likely been hearing more and more about JavaScript and the implications it has on crawling and indexing. Frankly, Googlebot struggles with it, and many websites today rely on modern JavaScript to load in crucial content. Because of this, we need to be equipped to discuss this topic when it comes up in order to be effective.

The goal of this post is to equip you with the minimum viable knowledge required to do so. This post won’t go into the nitty-gritty details, describe the history, or give you extreme detail on specifics. There are a lot of incredible write-ups that already do this — I suggest giving them a read if you are interested in diving deeper (I’ll link out to my favorites at the bottom).

In order to be effective consultants when it comes to the topic of JavaScript and SEO, we need to be able to answer three questions:

  1. Does the domain/page in question rely on client-side JavaScript to load/change on-page content or links?
  2. If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?
  3. If not, what is the ideal solution?

With some quick searching, I was able to find three examples of landing pages that utilize JavaScript to load in crucial content.

I’m going to be using Sitecore’s Symposium landing page through each of these talking points to illustrate how to answer the questions above.

We’ll cover the “how do I do this” aspect first, and at the end I’ll expand on a few core concepts and link to further resources.

Question 1: Does the domain in question rely on client-side JavaScript to load/change on-page content or links?

The first step to diagnosing any issues involving JavaScript is to check if the domain uses it to load in crucial content that could impact SEO (on-page content or links). Ideally this will happen anytime you get a new client (during the initial technical audit), or whenever your client redesigns/launches new features of the site.

How do we go about doing this?

Ask the client

Ask, and you shall receive! Seriously though, one of the quickest/easiest things you can do as a consultant is contact your POC (or developers on the account) and ask them. After all, these are the people who work on the website day-in and day-out!

“Hi [client], we’re currently doing a technical sweep on the site. One thing we check is if any crucial content (links, on-page content) gets loaded in via JavaScript. We will do some manual testing, but an easy way to confirm this is to ask! Could you (or the team) answer the following, please?

1. Are we using client-side JavaScript to load in important content?

2. If yes, can we get a bulleted list of where/what content is loaded in via JavaScript?”

Check manually

Even on a large e-commerce website with millions of pages, there are usually only a handful of important page templates. In my experience, it should take an hour at most to check them manually. I use the Web Developer plugin for Chrome to disable JavaScript, then manually check the important templates of the site (homepage, category page, product page, blog post, etc.)

In the example above, once we turn off JavaScript and reload the page, we can see that we are looking at a blank page.

As you make progress, jot down notes about content that isn’t being loaded in, is being loaded in wrong, or any internal linking that isn’t working properly.

At the end of this step we should know if the domain in question relies on JavaScript to load/change on-page content or links. If the answer is yes, we should also know where this happens (homepage, category pages, specific modules, etc.)

Crawl

You could also crawl the site (with a tool like Screaming Frog or Sitebulb) with JavaScript rendering turned off, and then run the same crawl with JavaScript turned on, and compare the differences with internal links and on-page elements.

For example, it could be that when you crawl the site with JavaScript rendering turned off, the title tags don’t appear. In my mind this would trigger an action to crawl the site with JavaScript rendering turned on to see if the title tags do appear (as well as checking manually).
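If you’d rather script this check than click through templates by hand, below is a minimal sketch of the idea using Puppeteer. That tooling choice is an assumption on my part (the crawlers above do this without code); the URL is a placeholder, and you’d run it once per important template.

```typescript
// A rough sketch, not a production crawler. Assumes Node.js 18+ with
// Puppeteer installed (npm install puppeteer). The URL is a placeholder.
import puppeteer from "puppeteer";

const URL = "https://example.com/landing-page";

// Fetch the page's serialized DOM with JavaScript enabled or disabled.
async function getHtml(jsEnabled: boolean): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setJavaScriptEnabled(jsEnabled);
  await page.goto(URL, { waitUntil: "networkidle0" });
  const html = await page.content();
  await browser.close();
  return html;
}

async function main(): Promise<void> {
  const withJs = await getHtml(true);
  const withoutJs = await getHtml(false);

  // Crude but telling: if the page loses most of its markup (or its title)
  // once JavaScript is off, crucial content is being loaded client-side.
  console.log(`HTML size with JS:    ${withJs.length} chars`);
  console.log(`HTML size without JS: ${withoutJs.length} chars`);
  const title = /<title[^>]*>([^<]*)<\/title>/i.exec(withoutJs);
  console.log(`Title without JS: ${title ? title[1] : "(missing)"}`);
}

main().catch(console.error);
```

The output won’t tell you *what* is missing, only that something is; the manual template check above is still where the real diagnosis happens.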

Example

For our example, I went ahead and did a manual check. As we can see from the screenshot below, when we disable JavaScript, the content does not load.

In other words, the answer to our first question for this page is “yes, JavaScript is being used to load in crucial parts of the site.”

Question 2: If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?

If your client is relying on JavaScript on certain parts of their website (in our example they are), it is our job to try and replicate how Google is actually seeing the page(s). We want to answer the question, “Is Google seeing the page/site the way we want it to?”

In order to get a more accurate depiction of what Googlebot is seeing, we need to attempt to mimic how it crawls the page.

How do we do that?

Use Google’s new mobile-friendly testing tool

At the moment, the quickest and most accurate way to try and replicate what Googlebot is seeing on a site is by using Google’s new mobile friendliness tool. My colleague Dom recently wrote an in-depth post comparing Search Console Fetch and Render, Googlebot, and the mobile friendliness tool. His findings were that most of the time, Googlebot and the mobile friendliness tool resulted in the same output.

In Google’s mobile friendliness tool, simply input your URL, hit “run test,” and then once the test is complete, click on “source code” on the right side of the window. You can take that code and search for any on-page content (title tags, canonicals, etc.) or links. If they appear here, Google is most likely seeing the content.

Search for visible content in Google

It’s always good to sense-check. Another quick way to check if Googlebot has indexed content on your page is by simply selecting visible text on the page and doing a site:search for it in Google, with quotation marks around said text.

In our example there is visible text on the page that reads…

“Whether you are in marketing, business development, or IT, you feel a sense of urgency. Or maybe opportunity?”

When we do a site:search for this exact phrase, for this exact page, we get nothing. This means Google hasn’t indexed the content.
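For reference, the query itself looks like this, quotation marks included (the domain comes from our Sitecore example; swap in your own):

site:sitecore.com "Whether you are in marketing, business development, or IT, you feel a sense of urgency."

If Google returns no results for a phrase that’s plainly visible on the live page, the rendered content most likely isn’t in the index.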

Crawling with a tool

Most crawling tools have the functionality to crawl JavaScript now. For example, in Screaming Frog you can head to Configuration > Spider > Rendering, select “JavaScript” from the dropdown, and hit save. DeepCrawl and Sitebulb both have this feature as well.

From here you can input your domain/URL and see the rendered page/code once your tool of choice has completed the crawl.

Example:

When attempting to answer this question, my preference is to start by inputting the domain into Google’s mobile friendliness tool, copying the source code, and searching for important on-page elements (think title tag, <h1>, body copy, etc.). It’s also helpful to use a tool like a diff checker to compare the rendered HTML with the original HTML (Screaming Frog also has a function that lets you do this side by side).
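If you want to automate that search step, here’s a rough sketch that checks saved rendered source (e.g., copied out of the mobile friendliness tool) for key elements. The file path and the body-copy phrase are placeholders, and regexes are a blunt instrument for HTML, but they’re fine for a quick found/missing pass.

```typescript
// Quick found/missing check against saved rendered source. Assumes you've
// pasted the tool's "source code" output into rendered-source.html.
import { readFileSync } from "fs";

const html = readFileSync("rendered-source.html", "utf8"); // placeholder path

// Elements worth confirming; swap in your page's real phrases.
const checks: Array<[string, RegExp]> = [
  ["title tag", /<title[^>]*>[^<]+<\/title>/i],
  ["canonical", /<link[^>]+rel=["']canonical["']/i],
  ["h1", /<h1[\s>]/i],
  ["body copy", /Whether you are in marketing/i],
];

for (const [name, pattern] of checks) {
  console.log(`${name}: ${pattern.test(html) ? "found" : "MISSING"}`);
}
```

Run it once against the original HTML and once against the rendered HTML, and the difference in results tells you what JavaScript is responsible for.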

For our example, here is what the output of the mobile friendliness tool shows us.

After a few searches, it becomes clear that important on-page elements are missing here.

We also did the second test and confirmed that Google hasn’t indexed the body content found on this page.

The implication at this point is that Googlebot is not seeing our content the way we want it to, which is a problem.

Let’s jump ahead and see what we can recommend to the client.

Question 3: If we’re confident Googlebot isn’t seeing our content properly, what should we recommend?

Now that we know the domain uses JavaScript to load in crucial content, and that Googlebot is most likely not seeing that content, the final step is to recommend an ideal solution to the client. Key word: recommend, not implement. It’s 100% our job to flag the issue to our client, explain why it’s important (as well as the possible implications), and highlight an ideal solution. It is 100% not our job to try to do the developers’ job of figuring out an ideal solution with their unique stack/resources/etc.

How do we do that?

You want server-side rendering

The main reason Google is having trouble seeing Sitecore’s landing page right now is that the page asks the user (us, Googlebot) to do the heavy work of loading its JavaScript. In other words, they’re using client-side JavaScript.

Googlebot is literally landing on the page, trying to execute JavaScript as best as possible, and then needing to leave before it has a chance to see any content.

The fix here is to instead have Sitecore’s landing page load on their server. In other words, we want to take the heavy lifting off of Googlebot, and put it on Sitecore’s servers. This will ensure that when Googlebot comes to the page, it doesn’t have to do any heavy lifting and instead can crawl the rendered HTML.

In this scenario, Googlebot lands on the page and already sees the HTML (and all the content).
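To make the distinction concrete, here’s a minimal sketch of server-side rendering. Express and the hard-coded markup are illustrative assumptions on my part; the post doesn’t tell us what Sitecore’s actual stack looks like.

```typescript
// Minimal server-side rendering sketch (npm install express). The string
// below stands in for whatever actually builds the page's markup.
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  // With client-side rendering, this response would be a near-empty shell
  // plus a <script> tag, leaving Googlebot to assemble the page itself.
  // Here the server assembles the full HTML before responding.
  const content =
    "<h1>Sitecore Symposium</h1><p>Whether you are in marketing...</p>";
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Sitecore Symposium</title></head>
  <body>${content}</body>
</html>`);
});

app.listen(3000);
```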

There are more specific options (like isomorphic setups)

This is where it gets to be a bit in the weeds, but there are hybrid solutions. The best one at the moment is called an isomorphic setup.

In this model, we’re asking the client to load the first request on their server, and then any future requests are made client-side.

So Googlebot comes to the page, the client’s server has already executed the initial JavaScript needed for the page, sends the rendered HTML down to the browser, and anything after that is done on the client-side.
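Here’s a bare-bones sketch of the pattern. React and Express are illustrative assumptions, not what Airbnb or Sitecore necessarily use; the point is that the same component renders on the server for the first request and in the browser afterward.

```typescript
// Isomorphic sketch (npm install express react react-dom). createElement is
// used instead of JSX so this stays a plain .ts file.
import express from "express";
import { createElement } from "react";
import { renderToString } from "react-dom/server";

// Stand-in component; a real app would share this module with the browser bundle.
const App = () => createElement("h1", null, "Sitecore Symposium");

// Server: the initial request is rendered here, so Googlebot receives full HTML.
const app = express();
app.get("/", (_req, res) => {
  const markup = renderToString(createElement(App));
  res.send(`<!DOCTYPE html><html><body>
    <div id="root">${markup}</div>
    <script src="/bundle.js"></script>
  </body></html>`);
});
app.listen(3000);

// Client (this part ships in bundle.js): hydrate() attaches React to the
// markup the server already sent; anything after that runs client-side.
//
//   import { hydrate } from "react-dom";
//   import { createElement } from "react";
//   hydrate(createElement(App), document.getElementById("root"));
```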

If you’re looking to recommend this as a solution, please read this post from the Airbnb team, which covers isomorphic setups in detail.

AJAX crawling = no go

I won’t go into details on this, but just know that Google’s previous AJAX crawling solution for JavaScript has since been deprecated and will eventually stop working. We shouldn’t be recommending this method.

(However, I am interested to hear any case studies from anyone who has implemented this solution recently. How has Google responded? Also, here’s a great write-up on this from my colleague Rob.)

Summary

At the risk of severely oversimplifying, here’s what you need to do in order to start working with JavaScript and SEO in 2018:

  1. Know when/where your client’s domain uses client-side JavaScript to load in on-page content or links.
    1. Ask the developers.
    2. Turn off JavaScript and do some manual testing by page template.
    3. Crawl using a JavaScript crawler.
  2. Check to see if Googlebot is seeing content the way we intend it to.
    1. Google’s mobile friendliness checker.
    2. Doing a site:search for visible content on the page.
    3. Crawl using a JavaScript crawler.
  3. Give an ideal recommendation to the client.
    1. Server-side rendering.
    2. Hybrid solutions (isomorphic).
    3. Not AJAX crawling.

Further resources

I’m really interested to hear about any of your experiences with JavaScript and SEO. What are some examples of things that have worked well for you? What about things that haven’t worked so well? If you’ve implemented an isomorphic setup, I’m curious to hear how that’s impacted how Googlebot sees your site.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Want to target position 0? Here’s what you need to make that happen

Hey Google, how do you become the answer people hear on their voice assistants? Contributor Karen Bone explains how to make that happen by doing your homework on featured snippets.

The post Want to target position 0? Here’s what you need to make that happen appeared first on Search Engine…



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Do Content Writers Really Need to Think about SEO?

In my experience, creative writing pros have an endless appetite for writing advice. How to add more color and texture to your writing, storytelling techniques, endless discussions about the serial comma and finer points of usage. Elements like copywriting and conversion strategy? That tends to start to divide people up. Some writers want to pick
Read More…

The post Do Content Writers Really Need to Think about SEO? appeared first on Copyblogger.


Copyblogger


You Need Both of These Skill Sets to Keep Your Audience Coming Back for More

When I’m not performing my typical duties as Rainmaker Digital’s Marketing Technologist, I’m cooking up a storm in my kitchen. Amidst the rhythmic chopping of fresh produce, the clashing of pots and pans, and the roar of boiling water, I realized that my two roles have a lot in common. They both require a balance
Read More…

The post You Need Both of These Skill Sets to Keep Your Audience Coming Back for More appeared first on Copyblogger.


Copyblogger


Three Killer Skills Professional Writers Need to Succeed in 2018

What brought you here today? What are you hoping to learn, be, become, do, or change by reading Copyblogger? We’ll be asking that question a lot in the coming year, but while we wait (feel free to answer in the comments below — we’d love to hear it), allow us to talk about why we
Read More…

The post Three Killer Skills Professional Writers Need to Succeed in 2018 appeared first on Copyblogger.


Copyblogger


State of Enterprise SEO 2017: Overworked SEOs Need Direction

Posted by NorthStarInbound

This survey and its analysis were co-authored with North Star Inbound’s senior creative strategist, Andrea Pretorian.

In the spring of 2017, North Star Inbound partnered with seoClarity and BuzzStream to survey the state of enterprise SEO. We had our fair share of anecdotal evidence from our clients, but we wanted a more objective measurement of how SEO teams are assembled, what resources are allocated to them, what methods they use, and how they perform.

We hadn’t seen such data collected, particularly for enterprise SEO. We found this surprising given its significance, evident even in the number of “enterprise SEO tools” and solutions being marketed.

What is enterprise SEO?

There is no single fixed industry definition of “enterprise” beyond “large business.” For the purposes of this survey, we defined enterprise businesses as those with 500 or more employees. “Small enterprise” means 500–1000 employees, while “large enterprise” means over 1000 employees.

Industry discussion often points to the number of pages as being a potential defining factor for enterprise SEO, but even that is not necessarily a reliable measure.

What was our survey methodology?

We developed the most extensive enterprise SEO survey to date, made up of 29 questions that delved into every aspect of the enterprise SEO practice. From tools and tactics to content development, keyword strategy, and more, we left no stone unturned. We then picked the brains of 240 SEO specialists across the country. You can check out our complete survey, methodology, and results here.

Team size matters — or does it?

Let’s start by looking at enterprise team size and the resources allocated to them. We focused on companies with an in-house SEO team, and broke them down in terms of small (500–1000 employees) and large enterprise (>1000 employees).

We found that 76% of small enterprise companies have in-house SEO teams of 5 people or less, but were surprised that 68% of large enterprise companies also had teams of this size. We expected a more pronounced shift into larger team sizes paralleling the larger size of their parent company; we did not expect to see roughly the same team size across small and large enterprise companies.

[Chart: in-house SEO team size, small vs. large enterprise]

Interestingly, in larger companies we also see less confidence in the team’s experience with SEO. Of the companies with in-house SEO, only 31.67% of large enterprise teams called themselves “leaders” in the SEO space, which this survey defined as a team engaged broadly and critically within the business; 40% of small enterprise teams called themselves “leaders.” In terms of viewing themselves more positively (leaders, visionaries) or less positively (SEO pioneers in their company, or else new SEO teams), we did not notice a big difference between small and large enterprise in-house SEO teams.

Large enterprise companies should have more resources at their disposal — HR teams to hire the best talent, reliable onboarding practices in place, access to more sophisticated project management tools, and more experience managing teams — which makes these results surprising. Why are large enterprise companies not more confident about their SEO skills and experience?

Before going too far in making assumptions about their increased resources, we made sure to ask our survey takers about this. Specifically, we asked how much budget is allocated to SEO activity per month — not including the cost of employees’ salaries or the overhead costs of keeping the lights on — since this would produce a figure that could be reported consistently across all survey takers.

It turns out that 57% of large enterprise companies had over $10K dedicated strictly to SEO activity each month, in contrast to just 24% of small enterprise companies allocating this much budget. 40% of large enterprise companies had over $20K dedicated to SEO activity each month, suggesting that SEO is a huge priority for them. And yet, as we saw earlier, they are not sold on their team having reached leader status.

Enterprise SEO managers in large companies value being scalable and repeatable

We asked survey takers to rate the success of their current SEO strategy, per the scale mapped below, and here are the results:

[Chart: self-rated success of current SEO strategy]

A smaller percentage of large enterprise SEOs gave a clearly positive rating of their current SEO strategy’s success than small enterprise SEOs did. We even see more large enterprise SEOs “on the fence” about their strategy’s performance. This suggests that, among the enterprise SEOs we surveyed, those who work for smaller companies tend to be slightly more optimistic about their campaigns’ performance.

What’s notable about the responses to this question is that 18.33% of managers at large enterprise companies rated themselves as successful — calling their strategy “scalable and repeatable.” No one at a small enterprise selected this description. We clearly tapped into an important value for these teams: it’s the measure they hold their performance to, and a benchmark of success they can report to others.

Anyone seeking to work with large enterprise clients needs to make sure their processes are scalable and repeatable. This also suggests that one way for a growing company to step up its SEO team’s game as it grows is by achieving these results. This would be a good topic for us to address in greater detail in articles, webinars, and other industry communication.

Agencies know best? (Agencies think they know best.)

Regardless of the resources available to them, across the board we see that in-house SEOs do not show as much confidence as agencies. Agencies are far more likely to rate their SEO strategy as successful: 43% of survey takers who worked for agencies rated their strategy as outright successful, as opposed to only 13% of in-house SEOs. That’s huge!

While nobody said their strategy was a total disaster — we clearly keep awesome company — 7% of in-house SEOs expressed frustration with their strategy, as opposed to only 1% of agencies.

Putting our bias as a link building agency aside, we would expect in-house SEO enterprise teams to work like in-house agencies. With the ability to hire top talent and purchase enterprise software solutions to automate and track campaigns, we expect them to have the appropriate tools and resources at their disposal to generate the same results and confidence as any agency.

So why the discrepancy? It’s hard to say for sure. One theory is that the scalable, repeatable results we found earlier, which serve as benchmarks for enterprise, are difficult to attain, and that the way agencies evolve serves them better here. Agencies tend to develop somewhat organically — expanding their processes over time and focusing on SEO from day one — as opposed to an in-house team, which rarely exists from day one and, more often than not, sprouts up when the company’s growth makes marketing a priority.

One clue for answering this question might come from examining the differences between how agencies and in-house SEO teams responded to the question asking them what they find to be the top two most difficult SEO obstacles they are currently facing.

Agencies have direction, need budget; in-house teams have budget, need direction

If we look at the top three obstacles faced by agencies and in-house teams, both groups place finding SEO talent among them. Both groups also say that demonstrating ROI is an issue, although it’s more of an obstacle for agencies than for in-house SEO teams.

When we look at the third obstacle for each group, we find the biggest reveal. While agencies find themselves hindered by trying to secure enough budget, in-house SEO teams struggle to develop the right content; this seems in line with the point we made in the previous section comparing agency versus in-house success. Agencies have the processes down, but need to work hard to fit their clients’ budgets. In-house teams have the budget they need, but have trouble lining it up with the exact processes their company needs to grow as desired. The fact that almost half of the in-house SEOs rank developing the right content as their biggest obstacle — as opposed to just over a quarter of agencies — further supports this, particularly given how important content is to any marketing campaign.

Now, let’s take a step back and dig deeper into that second obstacle we noted: demonstrating ROI.

Everyone seems to be measuring success differently

One question that we asked of survey takers was about the top two technical SEO issues they monitor:

The spread across the different factors was roughly the same for the two groups. The most notable difference was that even more in-house SEO teams looked at page speed, although this was the top factor for both groups. Indexation was the second biggest factor for both, followed by duplicate content. There seems to be some general consensus about which technical SEO issues to monitor.

But when we asked everyone what their top two factors are when reviewing their rankings, we got these results:

For both agencies and in-house SEO teams, national-level keywords were the top factor, although this was true for almost three-quarters of in-house SEOs and about half of agencies. Interestingly, agencies focused a bit more on geo/local keywords as well as mobile. When we first opened up this data, we found this striking: it suggests a narrative where in-house SEO teams focus on more conservative, “seasoned” methods, while agencies are more likely to stay on the cutting edge.

Looking at the “Other” responses (free response), we had several write-ins from both subgroups who answered that traffic and leads were important to them. One agency survey-taker brought up a good point: that what they monitor “differs by client.” We would be remiss if we did not mention the importance of vertical-specific and client-specific approaches — even if you are working in-house, and your only client is your company. From this angle, it makes sense that everyone is measuring rankings and SEO differently.

However, we would like to see a bit more clarity within the community on setting these parameters, and we hope that these results will foster that sort of discussion. Please do feel free to reply in the comments:

  • How do you measure ROI on your SEO efforts?
  • How do you show your campaigns’ value?
  • What would you change about how you’re currently measuring the success of your efforts?

So what’s next?

We’d love to hear about your experiences, in-house or agency, and how you’ve been able to demonstrate ROI on your campaigns.

We’re going to repeat this survey again next year, so stay tuned. We hope to survey a larger audience so that we can break down the groups we examine further and analyze response trends among the resulting subgroups. We wanted to do this here in this round of analysis, but were hesitant because of how small the resulting sample size would be.



Moz Blog


Google (Almost Certainly) Has an Organic Quality Score (Or Something a Lot Like It) that SEOs Need to Optimize For – Whiteboard Friday

Posted by randfish

Entertain the idea, for a moment, that Google assigned a quality score to organic search results. Say it was based off of click data and engagement metrics, and that it would function in a similar way to the Google AdWords quality score. How exactly might such a score work, what would it be based off of, and how could you optimize for it?

While there’s no hard proof it exists, the organic quality score is a concept that’s been pondered by many SEOs over the years. In today’s Whiteboard Friday, Rand examines this theory inside and out, then offers some advice on how one might boost such a score.

[Whiteboard image: Google's Organic Quality Score]

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about organic quality score.

So this is a concept. This is not a real thing that we know Google definitely has. But there’s this concept that SEOs have been feeling for a long time, that similar to what Google has in their AdWords program with a paid quality score, where a page has a certain score assigned to it, that on the organic side Google almost definitely has something similar. I’ll give you an example of how that might work.

So, for example, if on my site.com I have these three — this is a very simplistic website — but I have these three subfolders: Products, Blog, and About. I might have a page in my products, 14axq.html, and it has certain metrics that Google associates with it through activity that they’ve seen from browser data, from clickstream data, from search data, and from visit data from the searches and bounces back to the search results, and all these kinds of things, all the engagement and click data that we’ve been talking about a lot this year on Whiteboard Friday.

So they may have these metrics, pogo stick rate and bounce rate and a deep click rate (the rate with which someone clicks to the site and then goes further in from that page), the time that they spend on the site on average, the direct navigations that people make to it each month through their browsers, the search impressions and search clicks, perhaps a bunch of other statistics, like whether people search directly for this URL, whether they perform branded searches. What rate do unique devices in one area versus another area do this with? Is there a bias based on geography or device type or personalization or all these kinds of things?

But regardless of that, you get this idea that Google has this sort of sense of how the page performs in their search results. That might be very different across different pages and obviously very different across different sites. So maybe this blog post over here on /blog is doing much, much better in all these metrics and has a much higher quality score as a result.

Current SEO theories about organic quality scoring:

Now, when we talk to SEOs, and I spend a lot of time talking to my fellow SEOs about theories around this, a few things emerge. I think most folks are generally of the opinion that if there is something like an organic quality score…

1. It is probably based on this type of data — queries, clicks, engagements, visit data of some kind.

We don’t doubt for a minute that Google has much more sophistication than the super-simplified stuff that I’m showing you here. I think Google publicly denies a lot of single types of metric like, “No, we don’t use time on site. Time on site could be very variable, and sometimes low time on site is actually a good thing.” Fine. But there’s something in there, right? They use some more sophisticated format of that.

2. We also are pretty sure that this is applying on three different levels:

This is an observation from experimentation as well as from Google statements:

  • Domain-wide, so that would be across one domain, if there are many pages with high quality scores, Google might view that domain differently from a domain with a variety of quality scores on it or one with generally low ones.
  • Same thing for a subdomain. So it could be that a subdomain is looked at differently than the main domain, or that two different subdomains may be viewed differently. If content appears to have high quality scores on this one, but not on this one, Google might generally not pass all the ranking signals or give the same weight to the quality scores over here or to the subdomain over here.
  • Same thing is true with subfolders, although to a lesser extent. In fact, this is kind of in descending order. So you can generally surmise that Google will pass these more across subfolders than they will across subdomains and more across subdomains than across root domains.

3. A higher density of good scores to bad ones can mean a bunch of good things:

  • More rankings and visibility, even without other signals. So even if a page is somewhat lacking in these other quality signals, if it sits in this blog section, and this blog section tends to have high quality scores across its pages, Google might give that page an opportunity to rank well that it wouldn’t ordinarily get with those ranking signals in another subfolder, on another subdomain, or on another website entirely.
  • Some sort of what we might call “benefit of the doubt”-type of boost, even for new pages. So a new page is produced. It doesn’t yet have any quality signals associated with it, but it does particularly well.

    As an example, within a few minutes of this Whiteboard Friday being published on Moz’s website, which is usually late Thursday night or very early Friday morning, at least Pacific time, I will bet that you can search for “Google organic quality score” or even just “organic quality score” in Google’s engine, and this Whiteboard Friday will perform very well. One of the reasons that probably is, is because many other Whiteboard Friday videos, which are in this same subfolder, Google has seen them perform very well in the search results. They have whatever you want to call it — great metrics, a high organic quality score — and because of that, this Whiteboard Friday that you’re watching right now, the URL that you see in the bar up above is almost definitely going to be ranking well, possibly in that number one position, even though it’s brand new. It hasn’t yet earned the quality signals, but Google assumes, it gives it the benefit of the doubt because of where it is.

  • We surmise that there’s also more value that gets passed from links, both internal and external, from pages with high quality scores. That is right now a guess, but something we hope to validate more, because we’ve seen some signs and some testing that that’s the case.

3 ways to boost your organic quality score

If this is true — and it’s up to you whether you want to believe that it is or not — even if you don’t believe it, you’ve almost certainly seen signs that something like it is going on. I would urge you to do these three things to boost your organic quality score, or whatever you believe is driving these same effects.

1. You could add more high-performing pages. So if you know that pages perform well and you know what those look like versus ones that perform poorly, you can make more good ones.

2. You can improve the quality score of existing pages. So if this one is kind of low, you’re seeing that these engagement and use metrics, the SERP click-through rate metrics, the bounce rate metrics from organic search visits, all of these don’t look so good in comparison to your other stuff, you can boost it, improve the content, improve the navigation, improve the usability and the user experience of the page, the load time, the visuals, whatever you’ve got there to hold searchers’ attention longer, to keep them engaged, and to make sure that you’re solving their problem. When you do that, you will get higher quality scores.

3. Remove low-performing pages through a variety of means. You could take a low-performing page and you might say, “Hey, I’m going to redirect that to this other page, which does a better job answering the query anyway.” Or, “Hey, I’m going to 404 that page. I don’t need it anymore. In fact, no one needs it anymore.” Or, “I’m going to noindex it. Some people may need it, maybe the ones who are visitors to my website, who need it for some particular direct navigation purpose or internal purpose. But Google doesn’t need to see it. Searchers don’t need it. I’m going to use noindex, either in the meta robots tag or in the robots.txt file.”
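As a quick illustration of that noindex option (a sketch only; Express and the route are stand-ins, and any server or CMS can emit the same signals):

```typescript
// Two equivalent ways to tell Google not to index a page while leaving it
// reachable for visitors who still need it (npm install express).
import express from "express";

const app = express();

app.get("/low-performing-page", (_req, res) => {
  // Option 1: the HTTP header form of the directive.
  res.set("X-Robots-Tag", "noindex");
  // Option 2: the meta robots tag inside the HTML itself.
  res.send('<html><head><meta name="robots" content="noindex"></head>' +
    "<body>Content visitors may still need.</body></html>");
});

app.listen(3000);
```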

One thing that’s really interesting to note is we’ve seen a bunch of case studies, especially since MozCon, when Britney Muller, Moz’s Head of SEO, shared the fact that she had done some great testing around removing tens of thousands of low-quality, really low-quality performing pages from Moz’s own website and seen our rankings and our traffic for the remainder of our content go up quite significantly, even controlling for seasonality and other things.

That was pretty exciting. When we shared that, we got a bunch of other people from the audience and on Twitter saying, “I did the same thing. When I removed low-performing pages, the rest of my site performed better,” which really strongly suggests that there’s something like a system in this fashion that works in this way.

So I’d urge you to go look at your metrics, go find pages that are not performing well, see what you can do about improving them or removing them, see what you can do about adding new ones that are high organic quality score, and let me know your thoughts on this in the comments.

We’ll look forward to seeing you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog


Does Your Blog Need Editing and Proofreading?

"These tools help you evaluate your current publishing process." – Stefanie Flaxman

You might think that I’d recommend a thorough editing and proofreading process for every blog.

But that’s not the case.

Since I don’t know enough about your blog to answer the question I pose in the headline of this article for you, I want to provide tools that will help you evaluate your own publication.

Is your content successful with your current level of editing and proofreading, or would you benefit from more substantial revisions?

I’m a loyal follower of the Socratic method, so we’re going to explore the colossal question, “Does your blog need editing and proofreading?” by asking more questions.


Surviving the Social Web: 7 Things You Need to Know

"I've been online so long, I can remember when virtual community was going to save the world." – Sonia Simone

Oh, those idealistic good old days. Back when we truly believed that the global digital community would fact-check lies, make us smarter, and force our institutions to serve the greater good.

As the man said, “How’s that working out for us?”

It turns out that the social media utopia, like other utopias, didn’t end up as rosy as we’d hoped — mainly because it’s made of human beings.

But the social web is still an extraordinary tool. The ability to instantly communicate with thousands of people isn’t to be scoffed at — if you can do it without losing your mind.

I’ve been using social media since 1989. The remarkable thing for me isn’t what’s changed … it’s what’s stayed the same. Here are some of my survival tips from decades in the digital realm.

#1: Watch out for the ant-shakers

Remember ant farms? These were glass cases filled with sand or gel, where you could watch ants building tunnels and carrying things back and forth.

In grade school we all had that one mean friend who would shake it hard, just to destroy the tunnels and watch the ants scurrying around trying to fix the mess.

Every one of those ant-shakers got a Facebook account when they grew up.

Some people just crave chaos — and if they can’t find it, they create it. There’s always a storm brewing around them, some bitter flame war that pits half the community against the other half. It doesn’t seem to occur to them that the pain and anger they cause are real emotions attached to real people. Either they can’t see it or they don’t care.

Keep an eye out for the ant-shakers. A lot of them are attracted to the web, and spend a disproportionate amount of time there. They’re at the center of endless dust-ups, and it may take you some time to realize they’re engineering them.

Putting distance between yourself and the ant-shakers — even if (especially if) you’re related — will calm your social media experience down considerably.

#2: Realize that digital privacy is a lie

When we socialize over the web, we tend to reveal a lot. It can feel like a small, intimate space. After all, we’re sitting there on the sofa with our laptops, and we recognize those names that fly by, even if we might never have met them face to face.

Every day, I see people starting a post with something like — “I’ve never told anyone this before, not even my family” — and they’re sharing in a Facebook group with four million members.

Digital privacy depends on the goodwill of every person who has access to the material. Anyone can screenshot anything. Once they have, you have very little control over what they do with it.

In the real world, that means that digital privacy is a complete illusion.

If you aren’t willing to make it public, don’t share it on the web. Not in a private group, not on Snapchat, not in email.

Rather than trying to make these decisions on the fly, decide in advance what kinds of material you will — and won’t — share. There’s no one set of rules that will suit everyone — it’s really about your own comfort zone.

But it may clarify your thinking to ask yourself how you’ll feel if your mom, your boss, and a professional identity thief can see a particular type of content you’re sharing. Because chances are, eventually, all three of them will.

#3: If you’re in business, act like it

You may not feel particularly social about social media … maybe you’re there to promote a business or product.

Nothing wrong with that, if you handle it well.

A stream of pitches gets obnoxious fast. Trust me, your friends don’t want to buy your essential oils, nutrition shakes, skincare, or whatever the latest thing is. And they desperately wish you would stop trying to push it onto them.

Quit trying to spam your friends (it isn’t working), and start acting like a business.

Get a business account or page. Be clear about your purpose there — to sell something you believe is valuable. Educate yourself about real marketing — the kind that reaches people you didn’t go to high school with. (We have free resources to help with that.)

Promote content at least 10 times as often as you promote a product. “Content” is the stuff that most people are on the social web to look at and share — useful and interesting images, videos, articles, and audio.

Social media is an amazing way to get business-oriented content shared — either for free or for a very moderate cost. You can focus on organic reach, paid advertising, or a mix, depending on the platform and your resources.

#4: Seek (and create) smaller communities

Remember that four-million strong group I mentioned on Facebook? It’s got great energy … and it’s almost completely unmanageable.

The large common spaces on the web can be fascinating, but they’re also exhausting. For a greater sense of community, more useable information, and better connections, look for smaller groups.

Groups that are too small will run out of steam — there’s definitely a point of critical mass. But smallish online groups can be nurturing, delightful little communities.

If there isn’t a group like that in your topic — maybe you’re the right person to start one. It will be a lot of work (and you’ll probably have to manage a few ant-shakers), but it can also be wonderfully rewarding.

#5: Manage your time

Here’s the great, big, gigantic problem with social media — it will eat every minute of your life if you let it.

There’s always another great conversation. And there’s always another opportunity to explain to someone how wrong they are.

I’ve taken a tip from Cal Newport and I schedule my social media time. And because I have no self-control (and I prefer to use what I do have on other things), I use an app to manage that.

There are quite a few of these out there that will block certain sites at certain times, so you can be a productive member of human society. I’m partial to Freedom — it’s a paid app, but it has a flexibility I find highly useful.

#6: Mind your manners

This seems like it would be obvious, but we all blow it from time to time.

Be a kind, respectful, and polite person when you’re online. (Offline would be great too, of course.)

Don’t say ugly things you don’t mean. Don’t say ugly things you do mean.

Your extensive collection of racist knock-knock jokes isn’t funny. Never was, isn’t now.

Condescension and the attitude that you are entitled to other people’s time are as unpopular on the web as they are in real life.

Good manners are free, and they can open amazing doors … especially as they become rarer.

#7: Know when you need to back away

I’ve been online so long, I can remember when virtual community was going to save the world.

Now we know better. Over the years, I’ve realized that no one has to be on social media. Even social media managers could presumably find a different way to make a living. If it’s diminishing your life, you can change how you use it. You can also decide to go without it.

Sometimes I need to implement what I call the FFS rule. When I find myself muttering, “Oh FFS” (Google it if you need to), it’s time to log off.

People are irritating, and some of them are mean. Those people consistently get meaner and more irritating on the web.

Block and report trolls. Remember that you don’t have to reply to everything.

Dan Kennedy, of all people, had some rather good advice about this years ago. He wasn’t talking about social media, but he could have been.

“If I wake up three mornings thinking about you, and I’m not having sex with you, you’ve got to go.”

Pretty savvy social media advice from a guy who refuses to use email. Because it turns out, what tends to work well in social media … is what works well in real life.

The post Surviving the Social Web: 7 Things You Need to Know appeared first on Copyblogger.


Copyblogger


Why You Don’t Need to Be a Thought Leader

"Saying 'thought leadership' instead of influence has always reminded me of Homer Simpson calling his garage a 'car hole.'" – Sonia Simone

We all want to get traffic to our websites. We want to build audiences who are interested in what we have to say and responsive to our offers.

And so it’s natural to think that we should become “thought leaders.” (Or, to push the expression a little further down Jargon Lane, “thought leaders in our space.”)

Perhaps even more coveted than “going viral,” thought leadership is that elusive, glittering prize — the Golden Snitch of web publishing.

Most of us (I hope) know better than to self-identify as thought leaders. But we think it would be kind of great if other people started calling us that.

I’m not buying it. And here’s why.

First, the petty part: I just hate the term. It’s a clumsy verbal construct that has no need to exist.

Saying “thought leadership” instead of influence has always reminded me of Homer Simpson calling his garage a “car hole.”

But I have real reasons, too.

Let me be clear: I think it’s smart to publish the kind of content that people pay attention to. I think it’s smart to publish good advice. I think it’s smart to be smart.

But thought leadership implies that you have some kind of shiny, new insight that no one has articulated before. To be a thought leader, what you’re saying can’t just be interesting, well-reasoned, and useful — it has to be new.

Novelty is not wisdom

Allow me to propose a radical notion:

We don’t actually need a bunch of new thoughts. We need to pursue and implement the existing thoughts that make sense.

I’m not talking about innovation in technology … that’s going to happen whether we have “thought leaders” or not.

I’m talking about people who claim completely new insights about how the world fundamentally works — whether it’s health, business, the environment, or anything else we care about.

Most thought leaders create novelty in one of two ways.

The first is to repackage old advice in a sparkly new wrapper. Marketers have done this forever, and I don’t actually have a problem with it. New wrappers make things more interesting, and that gets us to pay fresh attention to those darned fundamentals.

But don’t kid yourself and think it makes you a thought leader. It makes you a good teacher. Which is better, because it’s useful.

The other way, of course, is to make up some nonsense.

Tell us all about how the future will belong to left-handed people, that in 2030 the global economy will be based on bacon, or that you’ve identified breakthrough, new research showing that eating nothing but transparent food will make you 17.684 times more intelligent.

If you are in possession of special, unique wisdom that no one else knows about, either you’ve dressed some old wisdom in a new suit or you are pushing a great big pile of BS.

And by the way …

Every expert you know is wrong about something

My other problem with thought leaders is that their audiences start to see them as cult leaders.

I’ll never forget reading some guy’s 50-line-long comment on a Tim Ferriss blog post, asking about what and when he should eat to correspond with variations in the timing of this person’s bowel activity.

This is literally a person asking Tim Ferriss how often he should take a shit.

We expect an authority to be smart about their topic. Economic authorities should be smart about the economy. Nutrition authorities should be smart about nutrition. And so forth.

We expect thought leaders to be quasi-religious figures, blessing us with their deep thoughts and profound insights, and showing us their unique sacred path to life, liberty, and the pursuit of happiness.

Implicit in this idea of a thought leader is the notion that you need someone to tell you how and what to think. And that, frankly, is a terrible idea.

Thought leadership is a bubble

My other issue with thought leadership is that it’s a catchphrase for a bubble that doesn’t need to be reinforced.

The world is made up of a lot of different kinds of people. They come from different places; they look different; they do different things on their days off; they have different family lives and social circles and work histories.

But thought leaders all look eerily alike.

Do we really need more Business Insider types telling us how the world works? Could we maybe hear from some people who don’t have the exact same CV, the same vocabulary, the same haircut, and the same sports jacket?

Might it not be useful to determine our paths for ourselves, based on our own observations and intelligence, reflecting our individual experiences, striving to see the larger picture, and weighing the informed opinions of actual authorities who back their assertions with credible evidence?

We don’t need thought leadership … we need leadership

Thought leaders strive for new ideas. Leaders strive for good ideas.

You don’t need someone to tell you what to think. I trust you to have that covered.

Your audience doesn’t need it, either. They’re smart. But they have questions, and you can help with that.

I believe it’s useful to step up and share your experience. I find it’s massively useful when someone who has done something difficult talks about what they’ve learned along the way.

I believe in expertise. Some people are better at a given skill than others. Usually because they have a lot of practice doing it.

I believe that most of us have days when our confidence fails, and we can use a pep talk.

And I believe that it’s powerful to let people know what you believe in. Not because you’re telling them to believe the same way, but because you’re inviting those who do to walk with you.

So, what if you actually come up with a new idea?

New ideas do actually come up sometimes. Maybe you’ll come up with one of them.

If you have a new perspective or insight, and it’s supported by credible evidence, that can be a powerful thing.

Write about it. Question it. Investigate it. Teach it. Promote it.

Just like you do with all the good advice you offer. Whether your idea is good or bad doesn’t depend on an overused label.

The world doesn’t need you to chase after some empty notion of thought leadership.

Leading your audience with your expertise, your confidence, your integrity, and your passion for their well-being is enough.

The post Why You Don’t Need to Be a Thought Leader appeared first on Copyblogger.


Copyblogger

