Tag Archive | "Keyword"

New Keyword Tool

We update our keyword tool periodically, and we recently rolled out another update.

For comparison's sake, the old keyword tool looked like this:

Whereas the new keyword tool looks like this:

The upsides of the new keyword tool are:

  • fresher data from this year
  • more granular data on ad bids vs click prices
  • lists ad clickthrough rate
  • more granular estimates of Google AdWords advertiser ad bids
  • more emphasis on commercially oriented keywords

With the new [ad spend] and [traffic value] columns, here is how we estimate each:

  • paid search ad spend: search ad clicks * CPC
  • organic search traffic value: ad impressions * 0.5 * (100% – ad CTR) * CPC

The first of those two is rather self-explanatory. The second is a bit more complex: it starts with the assumption that about half of all searches do not get any clicks, then subtracts the paid clicks from the remaining pool of clicks & multiplies what is left by the cost per click.
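To make the arithmetic concrete, here is a minimal Python sketch of both estimates. The input numbers are invented for illustration, and the function and argument names are our own rather than fields from the tool.

```python
def estimate_keyword_value(ad_impressions, ad_ctr, cpc):
    """Rough estimates mirroring the two formulas above.

    ad_impressions: monthly search ad impressions for the keyword
    ad_ctr: ad clickthrough rate as a fraction (e.g. 0.04 for 4%)
    cpc: average cost per click in dollars
    """
    ad_clicks = ad_impressions * ad_ctr
    paid_search_ad_spend = ad_clicks * cpc
    # Assume roughly half of searches get no click at all, then remove the
    # paid-click share; what remains is the organic click pool.
    organic_search_traffic_value = ad_impressions * 0.5 * (1 - ad_ctr) * cpc
    return paid_search_ad_spend, organic_search_traffic_value

spend, value = estimate_keyword_value(ad_impressions=10_000, ad_ctr=0.04, cpc=1.50)
print(f"ad spend ~ ${spend:,.0f}, organic traffic value ~ ${value:,.0f}")
# ad spend ~ $600, organic traffic value ~ $7,200
```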

The new data also has some drawbacks:

  • Rather than listing specific search counts, it lists relative ranges like low, very high, etc.
  • Since it tends to tilt more toward keywords with ad impressions, it may not have coverage for some longer tail informational keywords.

For any keyword with insufficient coverage we re-query the old keyword database and merge that data across. You can tell the data came from the new database if the first column says something like low or high; if the first column shows specific search counts, the data came from the older database.

For a limited time we are still allowing access to both keyword tools, though we anticipate removing access to the old keyword tool in the future once we have collected plenty of feedback on the new keyword tool. Please feel free to leave your feedback in the comments below.

One of the cool features of the new keyword tool worth highlighting further is the difference between estimated bid prices & estimated click prices. In the following screenshot you can see how Amazon is estimated as having a much higher bid price than actual click price, largely because, due to low keyword relevancy, entities other than the official brand (the ones Google arbitrages against the trademark owner) must bid much higher to appear on popular competing trademark terms.

Historically, this difference between bid price & click price was a big source of noise on lists of the most valuable keywords.

Recently some advertisers have started complaining about the “Google shakedown” from how many brand-driven searches are simply leaving the .com part off of a web address in Chrome & then being forced to pay Google for their own pre-existing brand equity.

SEO Book

Here’s how Google Ads’ new keyword selection preferences work

A look at the potential impact of same-meaning close variants for exact match, phrase match and broad match modifier on your keyword matching.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Using STAT: How to Uncover Additional Value in Your Keyword Data

Posted by TheMozTeam

Changing SERP features and near-daily Google updates mean that single keyword strategies are no longer viable. Brands have a lot to keep tabs on if they want to stay visible and keep that coveted top spot on the SERP.

That’s why we asked Laura Hampton, Head of Marketing at Impression, to share some of the ways her award-winning team leverages STAT to surface all kinds of insights to make informed decisions.

Snag her expert tips on how to uncover additional value in your keyword data — including how Impression’s web team uses STAT’s API to improve client reporting, how to spot quick wins with dynamic tags, and what new projects they have up their sleeves. Take it away, Laura!

Spotting quick wins 

We all remember the traditional CTR chart. It suggests that websites ranking in position one on the SERPs can expect roughly 30 percent of the clicks available, with position two getting around 12 percent, position three seeing six percent, and so on (disclaimer: these may not be the actual numbers but, let’s face it, this formula is way outdated at this point anyway).

Today, the SERP landscape has changed, so we know that the chances of any of the above-suggested numbers being correct are minimal — especially when you consider the influence of elements like featured snippets on click-through rates.

But the practical reality remains that if you can improve your ranking position, it’s highly likely you’ll get at least some uplift in traffic for that term. This is where STAT’s dynamic tags can really help. Dynamic tags are a special kind of tag that automatically populates keywords based on changeable filter criteria.

We like to set up dynamic tags based on ranking position. We use this to flag keywords which are sitting just outside of the top three, top five, or top 10 positions. Layer into this some form of traffic benchmark, and you can easily uncover keywords with decent traffic potential that just need an extra bit of work to tip them into a better position.
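As a rough illustration of that filter, here is a short Python sketch. The ranking data and the 500-searches-per-month benchmark are made up; in practice the filtering happens inside STAT's dynamic tags rather than in your own script.

```python
# Hypothetical exported ranking data: (keyword, current rank, monthly search volume)
keywords = [
    ("best online survey tools", 4, 2_400),
    ("survey software pricing", 12, 880),
    ("free survey builder", 6, 1_900),
    ("questionnaire templates", 23, 320),
]

RANK_WINDOW = (4, 10)        # just outside the top three
MIN_MONTHLY_SEARCHES = 500   # arbitrary traffic benchmark

quick_wins = [
    (kw, rank, volume)
    for kw, rank, volume in keywords
    if RANK_WINDOW[0] <= rank <= RANK_WINDOW[1] and volume >= MIN_MONTHLY_SEARCHES
]

for kw, rank, volume in sorted(quick_wins, key=lambda row: row[1]):
    print(f"{kw}: rank {rank}, ~{volume}/mo, a small push could tip it into the top 3")
```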

Chasing position zero with featured snippets and PAAs 

There’s been a lot of chat in our industry about the growing prevalence of SERP features like featured snippets and “People also ask” (PAA) boxes. In fact, STAT has been instrumental in leading much of the research into the influence of these two SERP features on brand visibility and CTRs.

If your strategy includes a hunt for the coveted position zero, you’re in luck. We like to use STAT’s dynamic tagging feature to monitor the keywords that result in featured snippets. This way, we can track keywords where our client owns the snippet and where they don’t. We can also highlight new opportunities to create optimized content and attempt to capture the spot from their competitors.

This also really helps guide our overall content strategy, since STAT is able to provide quick feedback on the type of content (and, therefore, the assumed intent) that will perform best amongst a keyword set.

Making use of data views 

Data views are one of the most fundamental elements of STAT. They are tools that allow you to organize your data in ways that are meaningful to you. Holding multiple keyword segments (tags) and producing aggregate metrics, they make it possible for us to dissect keyword information and then implement strategically driven decisions.

For us at Impression, data views are essential. They reflect the tactical aspirations of the client. While you could create a single templated dashboard for all your clients with the same data views, our strategists will often set up data views that mirror the way each client and account work.

Even if we’re not yet actively working on a keyword set, we usually create data views to enable us to quickly spot opportunities and report back on the strategic progression.

Here are just some of the data views we’ve grouped our keyword segments into:

The conversion funnel

Segmenting keywords into the stages of the conversion funnel is a fairly common strategy for search marketers — it makes it possible to focus in on and prioritize higher intent queries and then extrapolate out.

Many of our data views are set up to monitor keywords tagged as “conversion,” “education,” and “awareness.”

Client goals

Because we believe successful search marketing is only possible when it integrates with wider business goals, we like to spend time getting to know our clients’ audiences, as well as their specific niches and characteristics.

This way, we can split our keywords into those which reflect the segments that our clients wish to target. For example, in some cases, this is based on sectors, such as our telecommunications client who targets audiences in finance, marketing, IT, and general business. In others, it’s based on locations, in which case we’ll leverage STAT’s location capabilities to track the visibility of our clients to different locales.

Services and/or categories

For those clients who sell online — whether it’s products or services — data views are a great way to track their visibility within each service area or product category.

Our own dashboard (for Impression) uses this approach to split out our service-based keywords, so our data view is marked “Services” and the tags we track within are “SEO,” “PPC,” “web,” and so on. For one of our fashion clients, the data view relates to product categories, where the tracked tags include “footwear,” “accessories,” and “dresses.”

At-a-glance health monitoring

A relatively new feature in STAT allows us to see the performance of tags compared to one another: the Tags tab.

Because we use data views and tags a lot, this has been a neat addition for us. The ability to quickly view those tags and how the keywords within are progressing is immensely valuable.

Let’s use an example from above. For Impression’s own keyword set, one data view contains tags that represent different service offerings. When we click on that data view and choose “Tags” in the tabbed options, we can see how well each service area is performing in terms of its visibility online.

This means we can get very quick strategic insights that say our ranking positions for SEO are consistently pretty awesome, while those around CRO (which we are arguably less well known for) tend to fluctuate more. We can also make a quick comparison between them thanks to the layout of the tab.

Identifying keyword cannibalization risk through duplicate landing pages 

While we certainly don’t subscribe to any notion of a content cannibalization penalty per se, we do believe that having multiple landing pages for one keyword or keyword set is problematic.

That’s where STAT can help. We simply filter the keywords table to show a given landing page and we’re able to track instances where it’s ranking for multiple keywords.

By exporting that information, we can then compare the best and worst ranking URLs. We can also highlight where the ranking URL for a single keyword has changed, signaling internal conflict and, therefore, an opportunity to streamline and improve.
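If you export that keyword-to-URL data, a few lines of Python can surface the conflict: keywords for which more than one URL has ranked across the export window. The column names here are assumptions about the export, not STAT's actual schema.

```python
import csv
from collections import defaultdict

# Assumed export columns: keyword, date, ranking_url
urls_by_keyword = defaultdict(set)
with open("stat_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        urls_by_keyword[row["keyword"]].add(row["ranking_url"])

for keyword, urls in urls_by_keyword.items():
    if len(urls) > 1:
        # More than one URL has ranked for this keyword across the export window,
        # which signals internal conflict worth consolidating.
        print(f"{keyword}: {len(urls)} competing URLs -> {sorted(urls)}")
```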

Monitoring the competitive landscape 

No search strategy is complete without an understanding of the wider search landscape. Specifically, this means keeping track of your and/or your client’s rankings when compared to others ranking around them.

We like to use STAT’s Competitive Landscape tab to view this information for a specific data view, or across the whole account. In particular, the Share of Voice: Current Leaders board tells us very quickly who we’re up against for a keyword set.

This leads to insights such as the competitiveness of the keyword set, which makes it easier to set client expectations. It also surfaces the relevance of the keywords tracked: if the share of voice is going to brands that aren’t your own, it may indicate the keywords you’re targeting are not that relevant to your own audience.

You can also take a look at the Share of Voice: Top 10 Trending to see where competitors are increasing or decreasing their visibility. This can be indicative of changes on the SERPs for that industry, or in the industry as a whole.

Creating a custom connector for GDS 

Reporting is a fundamental part of agency life. Our clients appreciate formalized insights into campaign progression (on top of regular communications throughout the month, of course) and one of our main challenges in growing our agency lies in identifying the best way to display reports.

We’ll be honest here: There was a point where we had started to invest in building our own platform, with all sorts of aspirations of bespoke builds and highly branded experiences that could tie into a plethora of other UX considerations for our clients.

But at the same time, we’re also big believers that there’s no point in trying to reinvent the wheel if an appropriate solution already exists. So, we decided to use Google Data Studio (GDS) as it was released in Beta and moved onto the platform in 2017.

Of course, ranking data — while we’d all like to reserve it for internal insight to drive bigger goals — is always of interest to clients. At the time, the STAT API was publicly available, but there was no way to pull data into GDS.

That’s why we decided to put some of our own time into creating a GDS connector for STAT. Through this connector, we’re able to pull in live data to our GDS reports, which can be easily shared with our clients. It was a relatively straightforward process and, because GDS caches the data for a short amount of time, it doesn’t hammer the STAT API for every request.

Though our clients do have access to STAT (made possible through their granular user permissions), the GDS integration is a simpler way for them to see top-level stats at a glance.

We’re in the process of building pipelines through BigQuery to feed into this and facilitate date specific tracking in GDS too — keep an eye out for more info and get access to the STAT GDS connector here.

Want more? 

Ready to learn how to get cracking and tracking some more? Reach out to our rad team and request a demo to get your very own tailored walkthrough of STAT. 

If you’re attending MozCon this year, you can see the ins and outs of STAT in person — grab your ticket before they’re all gone! 

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

How to Automate Keyword Ranking with STAT and Google Data Studio

Posted by TheMozTeam

This blog post was originally published on the STAT blog.


We asked SEO Analyst, Emily Christon of Ovative Group, to share how her team makes keyword rank reporting easy and digestible for all stakeholders. Read on to see how she combines the power of STAT with Google Data Studio for streamlined reporting that never fails to impress her clients.

Why Google Data Studio

Creating reports for your clients is a vital part of SEO. It’s also one of the most daunting and time-consuming tasks. Your reports need to contain all the necessary data, while also having clear visuals, providing quick wins, and being easy to understand.

At Ovative Group, we’re big advocates for reporting tools that save time and make data easier to understand. This is why we love Google Data Studio.

This reporting tool was created with the user in mind and allows for easy collaboration and sharing with teams. It’s also free, and its reporting dashboard is designed to take the complexity and stress out of visualizing data.

Don’t get us wrong. We still love our spreadsheets, but tools like Excel aren’t ideal for building interactive dashboards. They also don’t allow for easy data pulls — you have to manually add your data, which can eat up a lot of time and cause a lot of feelings.

Data Studio, however, pulls all your data into one place from multiple sources, like spreadsheets, Google Analytics accounts, and Adwords. You can then customize how all that data is viewed so you can surface quick insights.

How does this relate to keyword reporting?

Creating an actionable keyword report that is beneficial for both SEO and your stakeholders can be a challenge. Data Studio makes things a bit easier for us at Ovative in a variety of ways:

Automated data integration

Our team uses the STAT API — which can be connected to Data Studio through a little technical magic and Google Big Query — to pull in all our raw data. You can select what data points you want to be collected from the API, including rank, base rank, competitors, search volume, local information, and more.

Once your data is collected and living in Big Query, you can access it through the Data Studio interface. If you want to learn more about STAT’s API, go here.
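The plumbing looks roughly like the sketch below: pull ranking rows from the STAT API, then stream them into a BigQuery table that Data Studio can read. The endpoint path, parameters, and response shape shown here are placeholders; check STAT's API documentation for the real ones.

```python
import requests
from google.cloud import bigquery

# Placeholder endpoint and parameters; consult STAT's API docs for the real ones.
STAT_API = "https://example.getstat.com/api/v2/json/rankings/list"
params = {"key": "YOUR_API_KEY", "site_id": 123, "from_date": "2019-06-01", "to_date": "2019-06-30"}

response = requests.get(STAT_API, params=params, timeout=30)
response.raise_for_status()
rows = response.json()["Response"]["Result"]  # assumed response shape

client = bigquery.Client()
table_id = "your-project.seo_reporting.stat_rankings"  # hypothetical dataset/table
errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
print(f"Loaded {len(rows)} ranking rows into {table_id}")
```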

Customization

Do you care about current rank? Rank over time? Major movers – those that changed +20 positions week over week? Or are you just after how many keywords you have ranking number one?

All of this is doable — and easy — once you’re comfortable in Data Studio. You can easily customize your reports to match your goals.

“Our team uses the STAT API — which can be connected to Data Studio through a little technical magic and Google Big Query — to pull in all our raw data.” — Emily Christon, SEO Analyst at Ovative Group

Custom dashboards make reporting and insights efficient and client-facing, transforming all that raw data into easy-to-understand metrics, which tell a more compelling story.

How to build your custom Google Data Studio 

There are a myriad of ways to leverage Google Data Studio for major insights. Here are just a few features we use to help visualize our data.

Keyword rank

This report gives you a snapshot of how many keywords you have in each ranking group and how things are trending. You can also scroll through your list of keywords to see what the traffic-driving queries are.

One cool feature of Data Studio when it comes to rank is period over period comparisons. For example, if you set the date range to the previous week, it will automatically pull week over week rank change. If you set the date range to the previous month, it pulls a month over month rank change.
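Under the hood that comparison is simple arithmetic. A sketch of the same week-over-week calculation in Python, with made-up rank data:

```python
# Hypothetical ranks by week for a handful of keywords
current_week = {"best online survey tools": 3, "survey software": 8, "free survey builder": 15}
previous_week = {"best online survey tools": 5, "survey software": 7, "free survey builder": 22}

for keyword, rank in current_week.items():
    change = previous_week[keyword] - rank  # positive = moved up the SERP
    direction = "up" if change > 0 else "down" if change < 0 else "flat"
    print(f"{keyword}: {previous_week[keyword]} -> {rank} ({direction} {abs(change)})")
```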

At Ovative, we do weekly, monthly, and yearly keyword rank change reporting.

Keyword look-up tool

If you notice that traffic has declined in a specific keyword set, pop down to the keyword look-up tool to track rank trends over time. This view is extremely helpful — it shows the progress or decline of rank to help explain traffic variability.

Campaign or priority tracker

To support newly launched pages or priority keywords, create a separate section just for these keywords. This will make it easy for you to quickly check the performance and trends of chosen keyword sets.

What’s next? 

Google Data Studio is only as powerful as you make it.

The STAT API integration in Google Data Studio represents one page of our typical client’s reporting studio; we make sure to add in a page for top-level KPI trends, a page for Search Console keyword performance, and other relevant sources for ease of use for ourselves and the client.

Want more? 

Want to dive deeper into STAT? Got questions about our API? You can book a demo with us and get a personalized walk through. 

You can also chat with our rad team at MozCon this July 15–17 to see how you can go seriously deep with your data. Ask about our specialty API — two additional services to give you everything a 100-result SERP has to offer, and perfect if you’ve built your own connector.

Moz Blog

Keyword Not Provided, But it Just Clicks

When SEO Was Easy

When I got started on the web over 15 years ago I created an overly broad & shallow website that had little chance of making money because it was utterly undifferentiated and crappy. In spite of my best (worst?) efforts while being a complete newbie, sometimes I would go to the mailbox and see a check for a couple hundred or a couple thousand dollars come in. My old roommate & I went to Coachella & when the trip was over I returned to a bunch of mail to catch up on & realized I had made way more while not working than what I spent on that trip.

What was the secret to a total newbie making decent income by accident?

Horrible spelling.

Back then search engines were not as sophisticated with their spelling correction features & I was one of 3 or 4 people in the search index that misspelled the name of an online casino the same way many searchers did.

The high minded excuse for why I did not scale that would be claiming I knew it was a temporary trick that was somehow beneath me. The more accurate reason would be thinking in part it was a lucky fluke rather than thinking in systems. If I were clever at the time I would have created the misspeller’s guide to online gambling, though I think I was just so excited to make anything from the web that I perhaps lacked the ambition & foresight to scale things back then.

In the decade that followed I had a number of other lucky breaks like that. One time one of the original internet bubble companies that managed to stay around put up a sitewide footer link targeting the concept that one of my sites made decent money from. This was just before the great recession, before Panda existed. The concept they targeted had 3 or 4 ways to describe it. 2 of them were very profitable & if they targeted either of the most profitable versions with that page the targeting would have sort of carried over to both. They would have outranked me if they targeted the correct version, but they didn’t so their mistargeting was a huge win for me.

Search Gets Complex

Search today is much more complex. In the years since those easy-n-cheesy wins, Google has rolled out many updates which aim to feature sought-after destination sites while diminishing the sites which rely on “one simple trick” to rank.

Arguably the quality of the search results has improved significantly as search has become more powerful, more feature rich & has layered in more relevancy signals.

Many quality small web publishers have gone away due to some combination of increased competition, algorithmic shifts & uncertainty, and reduced monetization as more ad spend was redirected toward Google & Facebook. But the impact as felt by any given publisher is not the impact as felt by the ecosystem as a whole. Many terrible websites have also gone away, while some formerly obscure though higher-quality sites rose to prominence.

There was the Vince update in 2009, which boosted the rankings of many branded websites.

Then in 2011 there was Panda as an extension of Vince, which tanked the rankings of many sites that published hundreds of thousands or millions of thin content pages while boosting the rankings of trusted branded destinations.

Then there was Penguin, which was a penalty that hit many websites which had heavily manipulated or otherwise aggressive appearing link profiles. Google felt there was a lot of noise in the link graph, which was their justification for the Penguin.

There were updates which lowered the rankings of many exact match domains. And then increased ad load in the search results, along with the other ranking shifts above, further lowered the ability of keyword-driven domain names to rank. If your domain is generically descriptive then there is a limit to how differentiated & memorable you can make it if you are targeting the core market the keywords are aligned with.

There is a reason eBay is more popular than auction.com, Google is more popular than search.com, Yahoo is more popular than portal.com & Amazon is more popular than a store.com or a shop.com. When that winner-take-most dynamic of many online markets is coupled with the move away from using classic relevancy signals, the economics shift to where it makes a lot more sense to carry the heavy overhead of establishing a strong brand.

Branded and navigational search queries could be used in the relevancy algorithm stack to confirm the quality of a site & verify (or dispute) the veracity of other signals.

Historically relevant algo shortcuts become less appealing as they become less relevant to the current ecosystem & even less aligned with the future trends of the market. Add in negative incentives for pushing on a string (penalties on top of wasting the capital outlay) and a more holistic approach certainly makes sense.

Modeling Web Users & Modeling Language

PageRank was an attempt to model the random surfer.

When Google is pervasively monitoring most users across the web they can shift to directly measuring their behaviors instead of using indirect signals.

Years ago Bill Slawski wrote about the long click, opening his post by quoting Steven Levy’s In the Plex: How Google Thinks, Works, and Shapes Our Lives:

“On the most basic level, Google could see how satisfied users were. To paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the “Long Click” — This occurred when someone went to a search result, ideally the top one, and did not return. That meant Google has successfully fulfilled the query.”

Of course, there’s a patent for that. In Modifying search result ranking based on implicit user feedback they state:

user reactions to particular search results or search result lists may be gauged, so that results on which users often click will receive a higher ranking. The general assumption under such an approach is that searching users are often the best judges of relevance, so that if they select a particular search result, it is likely to be relevant, or at least more relevant than the presented alternatives.

If you are a known brand you are more likely to get clicked on than a random unknown entity in the same market.

And if you are something people are specifically seeking out, they are likely to stay on your website for an extended period of time.

One aspect of the subject matter described in this specification can be embodied in a computer-implemented method that includes determining a measure of relevance for a document result within a context of a search query for which the document result is returned, the determining being based on a first number in relation to a second number, the first number corresponding to longer views of the document result, and the second number corresponding to at least shorter views of the document result; and outputting the measure of relevance to a ranking engine for ranking of search results, including the document result, for a new search corresponding to the search query. The first number can include a number of the longer views of the document result, the second number can include a total number of views of the document result, and the determining can include dividing the number of longer views by the total number of views.
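Stripped of patent language, the core calculation is just a ratio: the number of longer views of a result divided by its total views. A minimal sketch, using invented view counts:

```python
def implicit_relevance(longer_views, total_views):
    """Share of views that were 'long clicks', per the patent's description:
    the number of longer views divided by the total number of views."""
    return longer_views / total_views if total_views else 0.0

# Hypothetical click data for two results on the same query
print(implicit_relevance(longer_views=420, total_views=600))   # 0.7  -> users tend to stay
print(implicit_relevance(longer_views=35, total_views=580))    # ~0.06 -> users bounce back
```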

Attempts to manipulate such data may not work.

safeguards against spammers (users who generate fraudulent clicks in an attempt to boost certain search results) can be taken to help ensure that the user selection data is meaningful, even when very little data is available for a given (rare) query. These safeguards can include employing a user model that describes how a user should behave over time, and if a user doesn’t conform to this model, their click data can be disregarded. The safeguards can be designed to accomplish two main objectives: (1) ensure democracy in the votes (e.g., one single vote per cookie and/or IP for a given query-URL pair), and (2) entirely remove the information coming from cookies or IP addresses that do not look natural in their browsing behavior (e.g., abnormal distribution of click positions, click durations, clicks_per_minute/hour/day, etc.). Suspicious clicks can be removed, and the click signals for queries that appear to be spammed need not be used (e.g., queries for which the clicks feature a distribution of user agents, cookie ages, etc. that do not look normal).
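The "one vote per cookie and/or IP for a given query-URL pair" safeguard is easy to picture in code. A toy deduplication sketch with invented click data, nothing like Google's actual pipeline:

```python
from collections import defaultdict

# Each raw click: (cookie_or_ip, query, url)
raw_clicks = [
    ("cookie_a", "tennis court hire", "example.com/courts"),
    ("cookie_a", "tennis court hire", "example.com/courts"),  # repeat vote, discarded
    ("cookie_b", "tennis court hire", "example.com/courts"),
    ("cookie_c", "tennis court hire", "rival.com/hire"),
]

votes = defaultdict(set)
for cookie, query, url in raw_clicks:
    votes[(query, url)].add(cookie)  # a cookie only counts once per query-URL pair

for (query, url), voters in votes.items():
    print(f"{query} -> {url}: {len(voters)} deduplicated votes")
```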

And just like Google can make a matrix of documents & queries, they could also choose to put more weight on search accounts associated with topical expert users based on their historical click patterns.

Moreover, the weighting can be adjusted based on the determined type of the user both in terms of how click duration is translated into good clicks versus not-so-good clicks, and in terms of how much weight to give to the good clicks from a particular user group versus another user group. Some user’s implicit feedback may be more valuable than other users due to the details of a user’s review process. For example, a user that almost always clicks on the highest ranked result can have his good clicks assigned lower weights than a user who more often clicks results lower in the ranking first (since the second user is likely more discriminating in his assessment of what constitutes a good result). In addition, a user can be classified based on his or her query stream. Users that issue many queries on (or related to) a given topic T (e.g., queries related to law) can be presumed to have a high degree of expertise with respect to the given topic T, and their click data can be weighted accordingly for other queries by them on (or related to) the given topic T.

Google was using click data to drive their search rankings as far back as 2009. David Naylor was perhaps the first person who publicly spotted this. Google was ranking Australian websites for [tennis court hire] in the UK & Ireland, in part because that is where most of the click signal came from. That phrase was most widely searched for in Australia. In the years since Google has done a better job of geographically isolating clicks to prevent things like the problem David Naylor noticed, where almost all search results in one geographic region came from a different country.

Whenever SEOs mention using click data to search engineers, the search engineers quickly respond about how they might consider any signal but clicks would be a noisy signal. But if a signal has noise an engineer would work around the noise by finding ways to filter the noise out or combine multiple signals. To this day Google states they are still working to filter noise from the link graph: “We continued to protect the value of authoritative and relevant links as an important ranking signal for Search.”

The site with millions of inbound links, few intentional visits & those who do visit quickly click the back button (due to a heavy ad load, poor user experience, low quality content, shallow content, outdated content, or some other bait-n-switch approach)…that’s an outlier. Preventing those sorts of sites from ranking well would be another way of protecting the value of authoritative & relevant links.

Best Practices Vary Across Time & By Market + Category

Along the way, concurrent with the above sorts of updates, Google also improved their spelling auto-correct features, auto-completed search queries for many years through a feature called Google Instant (though they later undid forced query auto-completion while retaining automated search suggestions), and then they rolled out a few other algorithms that further allowed them to model language & user behavior.

Today it would be much harder to get paid above median wages explicitly for sucking at basic spelling or scaling some other individual shortcut to the moon, like pouring millions of low quality articles into a (formerly!) trusted domain.

Nearly a decade after Panda, eHow’s rankings still haven’t recovered.

Back when I got started with SEO, the phrase Indian SEO company was associated with cut-rate work where people were buying exclusively based on price. Sort of like: “I got a $500 budget for link building, but cannot under any circumstance invest more than $5 in any individual link.” Part of how my wife met me was she hired a hack SEO from San Diego who outsourced all the work to India and marked the price up about 100-fold while claiming it was all done in the United States. He created reciprocal links pages that got her site penalized & it didn’t rank until after she took her reciprocal links page down.

With that sort of behavior widespread (a hack US firm teaching people working in an emerging market poor practices), it likely meant many SEO “best practices” learned in an emerging market (particularly where the web was also underdeveloped) would be more inclined to be spammy. Considering how far ahead many Western markets were on the early Internet, how India has so many languages & how most web usage in India is on mobile devices where it is hard for users to create links, it only makes sense that Google would want to place more weight on end user data in such a market.

If you set your computer location to India, Bing’s search box lists 9 different languages to choose from.

The above is not to state anything derogatory about any emerging market, but rather that various signals are stronger in some markets than others. And competition is stronger in some markets than others.

Search engines can only rank what exists.

“In a lot of Eastern European – but not just Eastern European markets – I think it is an issue for the majority of the [bream? muffled] countries, for the Arabic-speaking world, there just isn’t enough content as compared to the percentage of the Internet population that those regions represent. I don’t have up to date data, I know that a couple years ago we looked at Arabic for example and then the disparity was enormous. so if I’m not mistaken the Arabic speaking population of the world is maybe 5 to 6%, maybe more, correct me if I am wrong. But very definitely the amount of Arabic content in our index is several orders below that. So that means we do not have enough Arabic content to give to our Arabic users even if we wanted to. And you can exploit that amazingly easily and if you create a bit of content in Arabic, whatever it looks like we’re gonna go you know we don’t have anything else to serve this and it ends up being horrible. and people will say you know this works. I keyword stuffed the hell out of this page, bought some links, and there it is number one. There is nothing else to show, so yeah you’re number one. the moment somebody actually goes out and creates high quality content that’s there for the long haul, you’ll be out and that there will be one.” – Andrey Lipattsev – Search Quality Senior Strategist at Google Ireland, on Mar 23, 2016


Impacting the Economics of Publishing

Now search engines can certainly influence the economics of various types of media. At one point some otherwise credible media outlets were pitching the Demand Media IPO narrative that Demand Media was the publisher of the future & what other media outlets would look like. Years later, after heavily squeezing the partner network & promoting programmatic advertising that reduces CPMs by the day, Google is funding partnerships with multiple news publishers like McClatchy & Gatehouse to try to revive the news dead zones even Facebook is struggling with.

“Facebook Inc. has been looking to boost its local-news offerings since a 2017 survey showed most of its users were clamoring for more. It has run into a problem: There simply isn’t enough local news in vast swaths of the country. … more than one in five newspapers have closed in the past decade and a half, leaving half the counties in the nation with just one newspaper, and 200 counties with no newspaper at all.”

As mainstream newspapers continue laying off journalists, Facebook’s news efforts are likely to continue failing unless they include direct economic incentives, as Google’s programmatic ad push broke the banner ad:

“Thanks to the convoluted machinery of Internet advertising, the advertising world went from being about content publishers and advertising context—The Times unilaterally declaring, via its ‘rate card’, that ads in the Times Style section cost $30 per thousand impressions—to the users themselves and the data that targets them—Zappo’s saying it wants to show this specific shoe ad to this specific user (or type of user), regardless of publisher context. Flipping the script from a historically publisher-controlled mediascape to an advertiser (and advertiser intermediary) controlled one was really Google’s doing. Facebook merely rode the now-cresting wave, borrowing outside media’s content via its own users’ sharing, while undermining media’s ability to monetize via Facebook’s own user-data-centric advertising machinery. Conventional media lost both distribution and monetization at once, a mortal blow.”

Google is offering news publishers audience development & business development tools.

Heavy Investment in Emerging Markets Quickly Evolves the Markets

As the web grows rapidly in India, they’ll have a thousand flowers bloom. In 5 years the competition in India & other emerging markets will be much tougher as those markets continue to grow rapidly. Media is much cheaper to produce in India than it is in the United States. Labor costs are lower & they never had the economic albatross that is the ACA adversely impact their economy. At some point the level of investment & increased competition will mean early techniques stop having as much efficacy. Chinese companies are aggressively investing in India.

“If you break India into a pyramid, the top 100 million (urban) consumers who think and behave more like Americans are well-served,” says Amit Jangir, who leads India investments at 01VC, a Chinese venture capital firm based in Shanghai. The early stage venture firm has invested in micro-lending firms FlashCash and SmartCoin based in India. The new target is the next 200 million to 600 million consumers, who do not have a go-to entertainment, payment or ecommerce platform yet— and there is gonna be a unicorn in each of these verticals, says Jangir, adding that it will not be as easy for a player to win this market considering the diversity and low ticket sizes.

RankBrain

RankBrain appears to be based on using user clickpaths on head keywords to help bleed rankings across into related searches which are searched less frequently. A Googler didn’t state this specifically, but it is how they would be able to use models of searcher behavior to refine search results for keywords which are rarely searched for.

In a recent interview in Scientific American a Google engineer stated: “By design, search engines have learned to associate short queries with the targets of those searches by tracking pages that are visited as a result of the query, making the results returned both faster and more accurate than they otherwise would have been.”

Now a person might go out and try to search for something a bunch of times or pay other people to search for a topic and click a specific listing, but some of the related Google patents on using click data (which keep getting updated) mentioned how they can discount or turn off the signal if there is an unnatural spike of traffic on a specific keyword, or if there is an unnatural spike of traffic heading to a particular website or web page.

And, since Google is tracking the behavior of end users on their own website, anomalous behavior is easier to track than it is across the broader web, where signals are more indirect. Google can take advantage of their wide distribution of Chrome & Android, where users are regularly logged into Google & pervasively tracked, to place more weight on users who have credit card data on file, a long account history with regular normal search behavior, heavy Gmail usage, etc.

Plus there is a huge gap between the cost of traffic & the ability to monetize it. You might have to pay someone a dime or a quarter to search for something & there is no guarantee it will work on a sustainable basis even if you paid hundreds or thousands of people to do it. Any of those experimental searchers will have no lasting value unless they influence rank, and even if they do influence rankings it might only last temporarily. If you bought a bunch of traffic to something genuine Google searchers didn’t like, then even if it started to rank better temporarily, the rankings would quickly fall back, because the real end user searchers disliked the site relative to other sites which already rank.

This is part of the reason why so many SEO blogs mention brand, brand, brand. If people are specifically looking for you in volume & Google can see that thousands or millions of people specifically want to access your site then that can impact how you rank elsewhere.

Even looking at something inside the search results for a while (dwell time) or quickly skipping over it to have a deeper scroll depth can be a ranking signal. Some Google patents mention how they can use mouse pointer location on desktop or scroll data from the viewport on mobile devices as a quality signal.

Neural Matching

Last year Danny Sullivan mentioned how Google rolled out neural matching to better understand the intent behind a search query.

The above Tweets capture what the neural matching technology intends to do. Google also stated:

we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.

To help people understand the difference between neural matching & RankBrain, Google told SEL: “RankBrain helps Google better relate pages to concepts. Neural matching helps Google better relate words to searches.”

There are a couple research papers on neural matching.

The first one was titled A Deep Relevance Matching Model for Ad-hoc Retrieval. It mentioned using Word2vec & here are a few quotes from the research paper:

  • “Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements.”
  • “the interaction-focused model, which first builds local level interactions (i.e., local matching signals) between two pieces of text, and then uses deep neural networks to learn hierarchical interaction patterns for matching.”
  • “according to the diverse matching requirement, relevance matching is not position related since it could happen in any position in a long document.”
  • “Most NLP tasks concern semantic matching, i.e., identifying the semantic meaning and inferring the semantic relations between two pieces of text, while the ad-hoc retrieval task is mainly about relevance matching, i.e., identifying whether a document is relevant to a given query.”
  • “Since the ad-hoc retrieval task is fundamentally a ranking problem, we employ a pairwise ranking loss such as hinge loss to train our deep relevance matching model.”

The paper mentions how semantic matching falls down when compared against relevancy matching because:

  • semantic matching relies on similarity matching signals (some words or phrases with the same meaning might be semantically distant), compositional meanings (matching sentences more than meaning) & a global matching requirement (comparing things in their entirety instead of looking at the best matching part of a longer document); whereas,
  • relevance matching can put significant weight on exact matching signals (weighting an exact match higher than a near match), adjust weighting on query term importance (one word or phrase in a search query might have a far higher discrimination value & deserve far more weight than the next) & leverage diverse matching requirements (allowing relevancy matching to happen in any part of a longer document); see the toy scoring sketch after this list
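To make the contrast concrete, here is a toy relevance-matching scorer in Python. It only illustrates the three ideas above (exact matching, term importance, matching anywhere in the document); it is not the model from the paper, and the corpus statistics are invented.

```python
import math

def relevance_score(query_terms, doc_terms, doc_freq, n_docs):
    """Toy illustration of exact matching signals, query term importance,
    and matching that may occur anywhere in the document."""
    score = 0.0
    for term in query_terms:
        # Query term importance: rarer terms get more weight (IDF-style).
        importance = math.log((n_docs + 1) / (doc_freq.get(term, 0) + 1))
        # Exact matching signal, allowed to occur at any position in the document.
        if term in doc_terms:
            score += importance
    return score

doc_freq = {"survey": 900, "tools": 1200, "typeform": 40}   # invented corpus stats
doc = set("the best survey tools compared including typeform".split())
print(relevance_score(["survey", "tools"], doc, doc_freq, n_docs=10_000))
print(relevance_score(["typeform", "survey"], doc, doc_freq, n_docs=10_000))
```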

Here are a couple images from the above research paper

And then the second research paper is

Deep Relevancy Ranking Using Enhanced Document-Query Interactions:

“interaction-based models are less efficient, since one cannot index a document representation independently of the query. This is less important, though, when relevancy ranking methods rerank the top documents returned by a conventional IR engine, which is the scenario we consider here.”

That same sort of re-ranking concept is being better understood across the industry. There are ranking signals that earn some base level ranking, and then results get re-ranked based on other factors like how well a result matches the user intent.

Here are a couple images from the above research paper.

For those who hate the idea of reading research papers or patent applications, Martinibuster also wrote about the technology here. About the only part of his post I would debate is this one:

“Does this mean publishers should use more synonyms? Adding synonyms has always seemed to me to be a variation of keyword spamming. I have always considered it a naive suggestion. The purpose of Google understanding synonyms is simply to understand the context and meaning of a page. Communicating clearly and consistently is, in my opinion, more important than spamming a page with keywords and synonyms.”

I think one should always consider user experience over other factors, however a person could still use variations throughout the copy & pick up a bit more traffic without coming across as spammy. Danny Sullivan mentioned the super synonym concept was impacting 30% of search queries, so there are still a lot which may only be available to those who use a specific phrase on their page.

Martinibuster also wrote another blog post tying more research papers & patents to the above. You could probably spend a month reading all the related patents & research papers.

The above sort of language modeling & end user click feedback complement link-based ranking signals in a way that makes it much harder to luck one’s way into any form of success by being a terrible speller or just bombing away at link manipulation without much concern toward any other aspect of the user experience or market you operate in.

Pre-penalized Shortcuts

Google was even issued a patent for predicting site quality based upon the N-grams used on the site & comparing those against the N-grams used on other established sites where quality has already been scored via other methods: “The phrase model can be used to predict a site quality score for a new site; in particular, this can be done in the absence of other information. The goal is to predict a score that is comparable to the baseline site quality scores of the previously-scored sites.”
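A crude way to picture the idea in Python: build n-gram profiles, measure how much a new site's profile overlaps with sites whose quality is already known, and predict from that overlap. This is purely illustrative and far simpler than the patent's actual phrase model.

```python
from collections import Counter

def ngrams(text, n=2):
    words = text.lower().split()
    return Counter(zip(*[words[i:] for i in range(n)]))

def overlap(a, b):
    """Jaccard overlap between two n-gram sets, a stand-in for the phrase model."""
    a_set, b_set = set(a), set(b)
    return len(a_set & b_set) / len(a_set | b_set) if a_set | b_set else 0.0

# Hypothetical previously scored sites: (n-gram profile, known quality score)
scored_sites = [
    (ngrams("in depth reviews with original testing and photos"), 0.9),
    (ngrams("click here buy now best price guaranteed click here"), 0.2),
]

new_site = ngrams("original testing and photos with in depth reviews")
weights = [(overlap(new_site, profile), score) for profile, score in scored_sites]
total = sum(w for w, _ in weights)
predicted = sum(w * s for w, s in weights) / total if total else 0.0
print(f"predicted quality score ~ {predicted:.2f}")
```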

Have you considered using a PLR package to generate the shell of your site’s content? Good luck with that as some sites trying that shortcut might be pre-penalized from birth.

Navigating the Maze

When I started in SEO one of my friends had a dad who is vastly smarter than I am. He advised me that Google engineers were smarter, had more capital, had more exposure, had more data, etc etc etc … and thus SEO was ultimately going to be a malinvestment.

Back then he was at least partially wrong because influencing search was so easy.

But in the current market, 16 years later, we are near the inflection point where he would finally be right.

At some point the shortcuts stop working & it makes sense to try a different approach.

The flip side of all the above changes is that as the algorithms have become more complex they have gone from being a headwind to people ignorant about SEO to being a tailwind for those who do not focus excessively on SEO in isolation.

If one is a dominant voice in a particular market, if they break industry news, if they have key exclusives, if they spot & name the industry trends, if their site becomes a must read & is what amounts to a habit … then they perhaps become viewed as an entity. Entity-related signals help them & those signals that are working against the people who might have lucked into a bit of success become a tailwind rather than a headwind.

If your work defines your industry, then any efforts to model entities, user behavior or the language of your industry are going to boost your work on a relative basis.

This requires sites to publish frequently enough to be a habit, or publish highly differentiated content which is strong enough that it is worth the wait.

Those which publish frequently without being particularly differentiated are almost guaranteed to eventually walk into a penalty of some sort. And each additional person who reads marginal, undifferentiated content (particularly if it has an ad-heavy layout) is one additional visitor that site is closer to eventually getting whacked. Success becomes self regulating. Any short-term success becomes self defeating if one has a highly opportunistic short-term focus.

Those who write content that only they could write are more likely to have sustained success.

SEO Book

The One-Hour Guide to SEO: Keyword Targeting & On-Page Optimization – Whiteboard Friday

Posted by randfish

We’ve covered strategy, keyword research, and how to satisfy searcher intent — now it’s time to tackle optimizing the webpage itself! In the fourth part of the One-Hour Guide to SEO, Rand offers up an on-page SEO checklist to start you off on your way towards perfectly optimized and keyword-targeted pages.

If you missed them, check out the other episodes in the series so far:

A picture of the whiteboard. The content is all detailed within the transcript below.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans. Welcome to another edition of our special One-Hour Guide to SEO. We are now on Part IV – Keyword Targeting and On-Page Optimization. So hopefully, you’ve watched Part III, where we talked about searcher satisfaction, how to make sure searchers are happy with the page content that you create and the user experience that you build for them, as well as Part II, where we talked about keyword research and how to make sure that you are targeting the right words and phrases that searchers are actually looking for, that you think you can actually rank for, and that actually get real organic click-through rate, because Google’s zero-click searches are rising.

A depiction of a site with important on-page SEO elements highlighted, drawn on the whiteboard.

Now we’re into on-page SEO. So this is essentially taking the words and phrases that we know we want to rank for with the content that we know will help searchers accomplish their task. Now how do we make sure that the page is optimal for ranking in Google?

On-page SEO has evolved

Well, this is very different from the way it was years ago. A long time ago, and unfortunately many people still believe this to be true about SEO, it was: How do I stuff my keywords into all the right tags and places on the page? How do I take advantage of things like the meta keywords tag, which hasn’t been used in a decade, maybe two? How do I take advantage of putting all the words and phrases stuffed into my title, my URL, my description, my headline, my H2 through H7 tags, all these kinds of things?

Most of that does not matter, but some of it still does. Some of it is still important, and we need to run through what those are so that you give yourself the best possible chance for ranking.

The on-page SEO checklist

So what I’ve done here is created a sort of brief, on-page SEO checklist. This is not comprehensive, especially on the technical portion, because we’re saving that for Part V, the technical SEO section, which we will get into, of this Guide. In this checklist, some of the most important things are on here. 

☑ Descriptive, compelling, keyword-rich title element

Many of the most important things are on here, and those include things like a descriptive, compelling, keyword-rich but not stuffed title element, also called the page title or a title tag. So, for example, if I am a tool website, like toolsource.com — I made that domain name up, I assume it’s registered to somebody — and I want to rank for the “best online survey tools,” well, “The Best Online Survey Tools for 2019” is a great title tag, and it’s very different from best online survey tools, best online survey software, best online survey software 2019. You’ve seen title tags like that. You’ve seen pages that contain stuff like that. That is no longer good SEO practice.

So we want that descriptive, compelling, makes me want to click. Remember that this title is also going to show up in the search results as the title of the snippet that your website appears in.

☑ Meta description designed to draw the click

Second, a meta description. This is still used by search engines, not for rankings though. Sort of think of it like ad text. You are drawing a click, or you’re attempting to draw the click. So what you want to do is have a description that tells people what’s on the page and inspires them, incites them, makes them want to click on your result instead of somebody else’s. That’s your chance to say, “Here’s why we’re valuable and useful.”

☑ Easy-to-read, sensible, short URL

An easy-to-read, sensible, short URL. For example, toolsource.com/reviews/best-online-surveys-2019. Perfect, very legible, very readable. I see that in the results, I think, “Okay, I know what that page is going to be.” I see that copied and pasted somewhere on the web, I think, “I know what’s going to be at that URL. That looks relevant to me.”

Or reviews.best-online-tools.info. Okay, well, first off, that’s a freaking terrible domain name. /oldseqs?ide=17 bunch of weird letters and tab detail equals this, and UTM parameter equals that. I don’t know what this is. I don’t know what all this means. By the way, having more than one or two URL parameters is very poorly correlated with and not recommended for trying to rank in search results. So you want to try and rewrite these to be more friendly, shorter, more sensible, and readable by a human being. That will help Google as well.

☑ First paragraph optimized for appearing in featured snippets

That first paragraph, the first paragraph of the content or the first few words of the page should be optimized for appearing in what Google calls featured snippets. Now, featured snippets is when I perform a search, for many queries, I don’t just see a list of pages. Sometimes I’ll see this box, often with an image and a bunch of descriptive text that’s drawn from the page, often from the first paragraph or two. So if you want to get that featured snippet, you have to be able to rank on page one, and you need to be optimized to answer the query right in your first paragraph. But this is an opportunity for you to be ranking in position three or four or five, but still have the featured snippet answer above all the other results. Awesome when you can do this in SEO, very, very powerful thing. Featured snippet optimization, there’s a bunch of resources on Moz’s website that we can point you to there too.

☑ Use the keyword target intelligently in…

☑ The headline

So if I’m trying to rank for “best online survey tools,” I would try and use that in my headline. Generally speaking, I like to have the headline and the title of the piece nearly the same or exactly the same so that when someone clicks on that title, they get the same headline on the page and they don’t get this cognitive dissonance between the two.

☑ The first paragraph

The first paragraph, we talked about. 

☑ The page content

The page’s content, you don’t want to have a page that’s talking about best online survey tools and you never mention online surveys. That would be a little weird. 

☑ Internal link anchors

An internal link anchor. So if other places on your website talk about online survey tools, you should be linking to this page. This is helpful for Google finding it, helpful for visitors finding it, and helpful to say this is the page that is about this on our website.

A whiteboard drawing depicting how to target one page with multiple keywords vs multiple pages targeting single keywords.

I do strongly recommend taking the following advice, which is we are no longer in a world where it makes sense to target one keyword per page. For example, best online survey tools, best online survey software, and best online survey tools 2019 are technically three unique keyword phrases. They have different search volumes. Slightly different results will show up for each of them. But it is no longer the case, whereas it was maybe a decade ago, that I would go create a page for each one of those separate things.

Instead, because these all share the same searcher intent, I want to go with one page, just a single URL that targets all the keywords that share the exact same searcher intent. If searchers are looking to find exactly the same thing but with slightly modified or slight variations in how they phrase things, you should have a page that serves all of those keywords with that same searcher intent rather than multiple pages that try to break those up, for a bunch of reasons. One, it’s really hard to get links to all those different pages. Getting links just period is very challenging, and you need them to rank.

Second, the difference between those pages is going to be very, very subtle, and it will seem awkward to Google that you have these slight variations of almost the same thing. It might even look to them like duplicate, very similar, or low-quality content, which can get you down-ranked. So stick to one page per set of shared-intent keywords.

☑ Leverage appropriate rich snippet options

Next, you want to leverage appropriate rich snippet options. So, for example, if you are in the recipes space, you can use a schema markup for recipes to show Google that you’ve got a picture of the recipe and a cooking time and all these different details. Google offers this in a wide variety of places. When you’re doing reviews, they offer you the star ratings. Schema.org has a full list of these, and Google’s rich snippets markup page offers a bunch more. So we’ll point you to both of those as well.
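
As a rough illustration of what that markup can look like, here is a minimal sketch that builds schema.org Recipe structured data as JSON-LD with Python. The recipe values and the example.com URL are placeholders; check schema.org and Google’s structured data documentation for the required and recommended properties for your content type.

    import json

    # Placeholder recipe data; name, image, cookTime, and aggregateRating are
    # standard schema.org/Recipe properties.
    recipe_jsonld = {
        "@context": "https://schema.org",
        "@type": "Recipe",
        "name": "Weeknight Chicken Curry",
        "image": "https://example.com/images/chicken-curry.jpg",
        "cookTime": "PT30M",  # ISO 8601 duration: 30 minutes
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.7",
            "reviewCount": "213",
        },
    }

    # Paste the output into the page inside a <script type="application/ld+json"> tag.
    print(json.dumps(recipe_jsonld, indent=2))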

☑ Images on the page employ…

Last, but certainly not least, because image search is such a huge portion of where Google’s search traffic comes from and goes to, it is very wise to optimize the images on the page. Image search can now send significant traffic to you, and optimizing for images can sometimes mean that other people will find your images through Google Images and then take them, put them on their own website, and link back to you, which solves a huge problem. Getting links is very hard. Images are a great way to do it.

☑ Descriptive, keyword-rich filenames

The images on your page should employ descriptive, keyword-rich filenames, meaning if I have one for Typeform, I don’t want it to be pic1, pic2, or pic3. I want it to be typeformlogo or typeformsurveysoftware as the name of the file.

☑ Descriptive alt attributes

The alt attribute or alt tag is part of how you describe that for screen readers and other accessibility-focused devices, and Google also uses that text too. 

☑ Caption text (if appropriate)

Caption text, if that’s appropriate, if you have like a photograph and a caption describing it, you want to be descriptive of what’s actually in the picture.

☑ Stored in same domain and subdomain

These files, in order to perform well, they generally need to be hosted on the same domain and subdomain. If, for example, all your images are stored on an Amazon Web Services domain and you don’t bother rewriting or making sure that the domain looks like it’s on toolsource.com/photos or /images here, that can cause real ranking problems. Oftentimes you won’t perform at all in Google images because they don’t associate the image with the same domain. Same subdomain as well is preferable.

If you do all these things and you nail searcher intent and you’ve got your keyword research, you are ready to move on to technical SEO and link building and then start ranking. So we’ll see you for that next edition next week. Take care.

Video transcription by Speechpad.com



Moz Blog

Posted in Latest NewsComments Off

The One-Hour Guide to SEO: Keyword Research – Whiteboard Friday

Posted by randfish

Before doing any SEO work, it’s important to get a handle on your keyword research. Aside from helping to inform your strategy and structure your content, you’ll get to know the needs of your searchers, the search demand landscape of the SERPs, and what kind of competition you’re up against.

In the second part of the One-Hour Guide to SEO, the inimitable Rand Fishkin covers what you need to know about the keyword research process, from understanding its goals to building your own keyword universe map. Enjoy!


Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans. Welcome to another portion of our special edition of Whiteboard Friday, the One-Hour Guide to SEO. This is Part II – Keyword Research. Hopefully you’ve already seen our SEO strategy session from last week. What we want to do in keyword research is talk about why keyword research is required. Why do I have to do this task prior to doing any SEO work?

The answer is fairly simple. If you don’t know which words and phrases people type into Google or YouTube or Amazon or Bing, whatever search engine you’re optimizing for, you’re not going to be able to know how to structure your content. You won’t be able to get into the searcher’s head, to imagine and empathize with what they actually want from your content. You probably won’t do correct targeting, which means your competitors, who are doing keyword research and choosing wise words, terms, and phrases that searchers are actually looking for, will have the advantage. You might unfortunately be optimizing for words and phrases that no one is actually looking for, that fewer people are looking for, or that are much more difficult than what you can actually rank for.

The goals of keyword research

So let’s talk about some of the big-picture goals of keyword research. 

Understand the search demand landscape so you can craft more optimal SEO strategies

First off, we are trying to understand the search demand landscape so we can craft better SEO strategies. Let me just paint a picture for you.

I was helping a startup here in Seattle, Washington, a number of years ago — this was probably a couple of years ago — called Crowd Cow. Crowd Cow is an awesome company. They basically will deliver beef from small ranchers and small farms straight to your doorstep. I personally am a big fan of steak, and I don’t really love the quality of the stuff that I can get from the store. I don’t love the mass-produced sort of industry around beef. I think there are a lot of Americans who feel that way. So working with small ranchers directly, where they’re sending it straight from their farms, is kind of an awesome thing.

But when we looked at the SEO picture for Crowd Cow, for this company, what we saw was that there was more search demand for competitors of theirs, people like Omaha Steaks, which you might have heard of. There was more search demand for them than there was for “buy steak online,” “buy beef online,” and “buy rib eye online.” Even things like just “shop for steak” or “steak online,” these broad keyword phrases, the branded terms of their competition had more search demand than all of the specific keywords, the unbranded generic keywords put together.

That is a very different picture from a world like “soccer jerseys,” where I spent a little bit of keyword research time today looking, and basically the brand names in that field do not have nearly as much search volume as the generic terms for soccer jerseys and custom soccer jerseys and football clubs’ particular jerseys. Those generic terms have much more volume, which is a totally different kind of SEO that you’re doing. One is very, “Oh, we need to build our brand. We need to go out into this marketplace and create demand.” The other one is, “Hey, we need to serve existing demand already.”

So you’ve got to understand your search demand landscape so that you can present to your executive team and your marketing team or your client or whoever it is, hey, this is what the search demand landscape looks like, and here’s what we can actually do for you. Here’s how much demand there is. Here’s what we can serve today versus we need to grow our brand.

Create a list of terms and phrases that match your marketing goals and are achievable in rankings

The next goal of keyword research is to create a list of terms and phrases that we can then use to match our marketing goals and achieve rankings. We want to make sure that the rankings we promise, the keywords we say we’re going to try and rank for, actually have real demand and that we can actually optimize for them and potentially rank for them. Or, in the cases where that’s not true because they’re too difficult to rank for, or because organic results don’t really show up in those types of searches, we should go after paid or maps or images or videos or some other type of search result.

Prioritize keyword investments so you do the most important, high-ROI work first

We also want to prioritize those keyword investments so we’re doing the most important work, the highest ROI work in our SEO universe first. There’s no point spending hours and months going after a bunch of keywords that if we had just chosen these other ones, we could have achieved much better results in a shorter period of time.

Match keywords to pages on your site to find the gaps

Finally, we want to take all the keywords that matter to us and match them to the pages on our site. If we don’t have matches, we need to create that content. If we do have matches but they are suboptimal, not doing a great job of answering that searcher’s query, well, we need to do that work as well. If we have a page that matches but we haven’t done our keyword optimization, which we’ll talk a little bit more about in a future video, we’ve got to do that too.

Understand the different varieties of search results

So an important part of understanding how search engines work — we’re going to start down here and then we’ll come back up — is to have this understanding that when you perform a query on a mobile device or a desktop device, Google shows you a vast variety of results. Ten or fifteen years ago this was not the case. We searched 15 years ago for “soccer jerseys,” what did we get? Ten blue links. I think, unfortunately, in the minds of many search marketers and many people who are unfamiliar with SEO, they still think of it that way. How do I rank number one? The answer is, well, there are a lot of things “number one” can mean today, and we need to be careful about what we’re optimizing for.

So if I search for “soccer jersey,” I get these shopping results from Macy’s and soccer.com and all these other places. Google sort of has this sliding box of sponsored shopping results. Then they’ve got advertisements below that, notated with this tiny green ad box. Then below that, there are a couple of organic results, what we would call classic SEO, 10-blue-links-style organic results. There are two of those. Then there’s a box of maps results that shows me local soccer stores in my region, which is a totally different kind of optimization, local SEO. So you need to make sure that you understand, and that you can convey to everyone on your team, that these different kinds of results mean different types of SEO.

Now I’ve done some work recently over the last few years with a company called Jumpshot. They collect clickstream data from millions of browsers around the world and millions of browsers here in the United States. So they are able to provide some broad overview numbers collectively across the billions of searches that are performed on Google every day in the United States.

Click-through rates differ between mobile and desktop

The click-through rates look something like this. For mobile devices, on average, paid results get 8.7% of all clicks and organic results get a little under 40% of all clicks. Zero-click searches, where a searcher performs a query but doesn’t click anything, either because Google essentially answers it right in the results or because the searcher is so unhappy with the potential results that they don’t bother clicking anything, account for 62%. So the vast majority of searches on mobile are no-click searches.

On desktop, it’s a very different story. It’s sort of inverted. So paid is 5.6%. I think people are a little savvier about which result they should be clicking on desktop. Organic is 65%, so much, much higher than mobile. Zero-click searches is 34%, so considerably lower.

There are a lot more clicks happening on a desktop device. That being said, right now we think it’s around 60–40, meaning 60% of queries on Google, at least, happen on mobile and 40% happen on desktop, somewhere in those ranges. It might be a little higher or a little lower.

The search demand curve

Another important and critical thing to understand about the keyword research universe and how we do keyword research is that there’s a sort of search demand curve. So for any given universe of keywords, there is essentially a small number, maybe a few to a few dozen keywords that have millions or hundreds of thousands of searches every month. Something like “soccer” or “Seattle Sounders,” those have tens or hundreds of thousands, even millions of searches every month in the United States.

But people searching for “Sounders FC away jersey customizable,” there are very, very few searches per month, but there are millions, even billions of keywords like this. 

The long-tail: millions of keyword terms and phrases, low number of monthly searches

When Sundar Pichai, Google’s current CEO, was testifying before Congress just a few months ago, he told Congress that around 20% of all searches that Google receives each day they have never seen before. No one has ever performed them in the history of the search engines. I think maybe that number is closer to 18%. But that is just a remarkable sum, and it tells you about what we call the long tail of search demand, essentially tons and tons of keywords, millions or billions of keywords that are only searched for 1 time per month, 5 times per month, 10 times per month.

The chunky middle: thousands or tens of thousands of keywords with ~50–100 searches per month

If you want to get into this next layer, what we call the chunky middle in the SEO world, this is where there are thousands or tens of thousands of keywords potentially in your universe, but they only have between say 50 and a few hundred searches per month.

The fat head: a very few keywords with hundreds of thousands or millions of searches

Then this fat head has only a few keywords. There’s only one keyword like “soccer” or “soccer jersey,” which is actually probably more like the chunky middle, but it has hundreds of thousands or millions of searches. The fat head is higher competition and broader intent.

Searcher intent and keyword competition

What do I mean by broader intent? That means when someone performs a search for “soccer,” you don’t know what they’re looking for. The likelihood that they want a customizable soccer jersey right that moment is very, very small. They’re probably looking for something much broader, and it’s hard to know exactly their intent.

However, as you drift down into the chunky middle and into the long tail, where there are more keywords but fewer searches for each keyword, your competition gets much lower. There are fewer people trying to compete and rank for those, because they don’t know to optimize for them, and there’s more specific intent. “Customizable Sounders FC away jersey” is very clear. I know exactly what I want. I want to order a customizable jersey from the Seattle Sounders away, the particular colors that the away jersey has, and I want to be able to put my logo on there or my name on the back of it, what have you. So super specific intent.

Build a map of your own keyword universe

As a result, you need to figure out what the map of your universe looks like so that you can present that, and at the end of the keyword research process you should be able to build a list that looks something like this. The screenshot featured here is from Moz’s Keyword Explorer, which is a tool that I really like to use and find super helpful whenever I’m helping companies; even now that I have left Moz and been gone for a year, I still use Keyword Explorer because the volume data is so good and it puts all the stuff together. However, there are two or three other tools that a lot of people like: one from Ahrefs, which I think also has the name Keyword Explorer, and one from SEMrush, which I like, although some of the volume numbers, at least in the United States, are not as good as what I might hope for. There are a number of other tools that you could check out as well. A lot of people like Google Trends, which is totally free and interesting for some of that broad volume data.



So I might have terms like “soccer jersey,” “Sounders FC jersey,” and “custom soccer jersey Seattle Sounders.” Then I’ll have these columns (there’s a rough scoring sketch just after this list): 

  • Volume, because I want to know how many people search for it; 
  • Difficulty, how hard will it be to rank. If it’s super difficult to rank and I have a brand-new website and I don’t have a lot of authority, well, maybe I should target some of these other ones first that are lower difficulty. 
  • Organic Click-through Rate, just like we talked about back here, there are different levels of click-through rate, and the tools, at least Moz’s Keyword Explorer tool, use Jumpshot data on a per-keyword basis to estimate what percent of people are going to click the organic results. Should you optimize for it? Well, if the click-through rate is only 60%, pretend that instead of 100 searches, this keyword only has 60 available searches for your organic clicks. Ninety-five percent, though, great, awesome. Nearly all of those monthly searches are available to you.
  • Business Value, how useful is this to your business? 
  • Then set some type of priority. So I might look at this list and say, “Hey, for my new soccer jersey website, this is the most important keyword. I want to go after ‘custom soccer jersey’ for each team in the U.S., then I’ll go after team jerseys, and then I’ll go after ‘customizable away jerseys.’ Maybe I’ll go after ‘soccer jerseys’ last, because it’s just so competitive and so difficult to rank for. There’s a lot of volume, but the search intent is not as great and the business value to me is not as good,” all those kinds of things.
  • Last, but not least, I want to know the types of searches that appear — organic, paid. Do images show up? Does shopping show up? Does video show up? Do maps results show up? If those other types of search results, like we talked about here, show up in there, I can do SEO to appear in those places too. That could yield, in certain keyword universes, a strategy that is very image centric or very video centric, which means I’ve got to do a lot of work on YouTube, or very map centric, which means I’ve got to do a lot of local SEO, or other kinds like this.
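
To make those columns concrete, here is a minimal sketch (assuming you have pandas installed) of a keyword list with a toy priority score. The numbers are made up and the formula is just one possible way to combine volume, organic click-through rate, difficulty, and business value; use whatever weighting reflects your own goals.

    import pandas as pd

    keywords = pd.DataFrame([
        {"keyword": "soccer jersey", "volume": 40000, "difficulty": 72,
         "organic_ctr": 0.60, "business_value": 2},
        {"keyword": "Sounders FC jersey", "volume": 3200, "difficulty": 45,
         "organic_ctr": 0.80, "business_value": 3},
        {"keyword": "custom soccer jersey Seattle Sounders", "volume": 250,
         "difficulty": 20, "organic_ctr": 0.95, "business_value": 5},
    ])

    # Discount raw volume by the share of clicks that actually go to organic results.
    keywords["available_searches"] = keywords["volume"] * keywords["organic_ctr"]

    # Toy priority score: reward demand and business value, penalise difficulty.
    keywords["priority"] = (keywords["available_searches"]
                            * keywords["business_value"]
                            / (keywords["difficulty"] + 1))

    print(keywords.sort_values("priority", ascending=False))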

Once you build a keyword research list like this, you can begin the prioritization process and the true work of creating pages, mapping the pages you already have to the keywords that you’ve got, and optimizing in order to rank. We’ll talk about that in Part III next week. Take care.

Video transcription by Speechpad.com



Moz Blog

Posted in Latest NewsComments Off

Make sense of your data with these essential keyword segments

Posted by TheMozTeam

This blog post was originally published on the STAT blog.


The first step to getting the most out of your SERP data is smart keyword segmentation — it surfaces targeted insights that will help you make data-driven decisions.

But knowing what to segment can feel daunting, especially when you’re working with thousands of keywords. That’s why we’re arming you with a handful of must-have tags.

Follow along as we walk through the different kinds of segments in STAT, how to create them, and which tags you’ll want to get started with. You’ll be a fanciful segment connoisseur by the time we’re through!

Segmentation in STAT

In STAT, keyword segments are called “tags” and come as two different types: standard or dynamic.

Standard tags are best used when you want to keep specific keywords grouped together because of shared characteristics — like term (brand, product type, etc), location, or device. Standard tags are static, so the keywords that populate those segments won’t change unless you manually add or remove them.

Dynamic tags, on the other hand, are a fancier kind of tag based on filter criteria. Just like a smart playlist, dynamic tags automatically populate with all of the keywords that meet said criteria, such as keywords with a search volume over 500 that rank on page one. This means that the keywords in a dynamic tag aren’t forever — they’ll filter in and out depending on the criteria you’ve set.
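
To make the “smart playlist” idea concrete, here is a toy sketch of how a dynamic segment behaves; it is not STAT’s actual implementation, and the keyword data is invented. The tag is just a saved filter, so its membership is recomputed from whatever the current data says.

    keywords = [
        {"keyword": "ankle boots", "search_volume": 8100, "rank": 4},
        {"keyword": "chelsea boots care", "search_volume": 320, "rank": 14},
        {"keyword": "winter boots", "search_volume": 5400, "rank": 9},
    ]

    def dynamic_tag(rows, min_volume=500, max_rank=10):
        """Return the keywords that currently meet the filter criteria."""
        return [r for r in rows if r["search_volume"] > min_volume and r["rank"] <= max_rank]

    # Membership changes automatically as ranks and volumes change between crawls.
    print(dynamic_tag(keywords))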

How to create a keyword segment

Tags are created in a few easy steps. At the Site level, pop over to the Keywords tab, click the down arrow on any table column header, and then select Filter keywords. From there, you can select the pre-populated options or enter your own metrics for a choose-your-own-filter adventure.

Once your filters are in place, simply click Tag All Filtered Keywords, enter a new tag name, and then pick the tag type best suited to your needs — standard or dynamic — and voila! You’ve created your very own segment.

Segments to get you started

Now that you know how to set up a tag, it’s time to explore some of the different segments you can implement and the filter criteria you’ll need to apply.

Rank and rank movement

Tracking your rank and ranking movements with dynamic tags will give you eyeballs on your keyword performance, making it easy to monitor and report on current and historical trends.

There’s a boatload of rank segments you can set up, but here’s just a sampling to get you started:

  • Keywords ranking in position 1–3; this will identify your top performing keywords.
  • Keywords ranking in position 11–15; this will suss out the low-hanging, top of page two fruit in need of a little nudge.
  • Keywords with a rank change of 10 or more (in either direction); this will show you keywords that are slipping off or shooting up the SERP.

Appearance and ownership of SERP features

Whether they’re images, carousels, or news results, SERP features have significantly altered the search landscape. Sometimes they push you down the page and other times, like when you manage to snag one, they can give you a serious leg up on the competition and drive loads more traffic to your site.

Whatever industry-related SERP features you want to keep apprised of, you can create dynamic tags that show their prevalence and movement within your keyword set. Segment even further with tags that show which keywords own those features and which have fallen short.

Below are a few segments you can set up for featured snippets and local packs.

Featured snippets

Everyone’s favourite SERP feature isn’t going anywhere anytime soon, so it wouldn’t be a bad idea to outfit yourself with a snippet tracking strategy. You can create as many tags as there are snippet options to choose from:

  • Keywords with a featured snippet.
  • Keywords with a paragraph, list, table, and/or carousel snippet.
  • Keywords with an owned paragraph, list, table, and/or carousel snippet.
  • Keywords with an unowned paragraph, list, table, and/or carousel snippet.

The first two will allow you to see over-arching snippet trends, while the last two will chart your ownership progress.

If you want to know the URL that’s won you a snippet, just take a peek at the URL column.

Local packs

If you’re a brick-and-mortar business, we highly advise creating tags for local packs since they provide a huge opportunity for exposure. These two tags will show you which local packs you have a presence in and which you need to work on:

  • Keywords with an owned local pack.
  • Keywords with an unowned local pack.

Want all the juicy data squeezed into a local pack, like who’s showing up and with what URL? We created the Local pack report just for that.

Landing pages, subdomains, and other important URLs

Whether you’re adding new content or implementing link-building strategies around subdomains and landing pages, dynamic tags allow you to track and measure page performance, see whether your searchers are ending up on the pages you want, and match increases in page traffic with specific keywords.

For example, are your informational intent keywords driving traffic to your product pages instead of your blog? To check, a tag that includes your blog URL will pull in each post that ranks for one of your keywords.

Try these three dynamic tags for starters:

  • Keywords ranking for a landing page URL.
  • Keywords ranking for a subdomain URL.
  • Keywords ranking for a blog URL.

Is a page not indexed yet? That’s okay. You can still create a dynamic tag for its URL and keywords will start appearing in that segment when Google finally gets to it.

Location, location, location

Google cares a lot about location and so should you, which is why keyword segments centred around location are essential. You can tag in two ways: by geo-modifier and by geo-location.

For these, it’s better to go with the standard tag as the search term and location are fixed to the keyword.

Geo-modifier

A geo-modifier is the geographical qualifier that searchers manually include in their query — like in [sushi near me]. We advocate for adding various geo-modifiers to your keywords and then incorporating them into your tagging strategy. For instance, you can segment by:

  • Keywords with “in [city]” in them.
  • Keywords with “near me” in them.

The former will show you how you fare for city-wide searches, while the latter will let you see if you’re meeting the needs of searchers looking for nearby options.


Geo-location

Geo-location is where the keyword is being tracked. More tracked locations mean more searchers’ SERPs to sample. And the closer you can get to searchers standing on a street corner, the more accurate those SERPs will be. This is why we strongly recommend you track in multiple pin-point locations in every market you serve.

Once you’ve got your tracking strategy in place, get your segmentation on. You can filter and tag by:

  • Keywords tracked in specific locations; this will let you keep tabs on geographical trends.
  • Keywords tracked in each market; this will allow for market-level research.

Search volume & cost-per-click

Search volume might be a contentious metric thanks to Google’s close variants, but having a decent idea of what it’s up to is better than a complete shot in the dark. We suggest at least two dynamic segments around search volume:

  • Keywords with high search volume; this will show which queries are popular in your industry and have the potential to drive the most traffic.
  • Keywords with low search volume; this can actually help reveal conversion opportunities — remember, long-tail keywords typically have lower search volumes but higher conversion rates.

Tracking the cost-per-click of your keywords will also bring you and your PPC team tonnes of valuable insights — you’ll know if you’re holding the top organic spot for an outrageously high CPC keyword.

As with search volume, tags for high and low CPC should do you just fine. High CPC keywords will show you where the competition is the fiercest, while low CPC keywords will surface your easiest point of entry into the paid game — queries you can optimize for with less of a fight.

Device type

From screen size to indexing, desktop and smartphones produce substantially different SERPs from one another, making it essential to track them separately. So, filter and tag for:

  • Keywords tracked on a desktop.
  • Keywords tracked on a smartphone.

Similar to your location segments, it’s best to use the standard tag here.

Go crazy with multiple filters

We’ve shown you some really high-level segments, but you can actually filter down your keywords even further. In other words, you can get extra fancy and add multiple filters to a single tag. Go as far as high search volume, branded keywords triggering paragraph featured snippets that you own for smartphone searchers in the downtown core. Phew!

Want to talk shop about segmentation or see dynamic tags in action? Say hello (don’t be shy) and request a demo.



Moz Blog

Posted in Latest NewsComments Off

How to Identify and Tackle Keyword Cannibalization in 2019

Posted by SamuelMangialavori

If you read the title of this blog and somehow, even only for a second, thought about the iconic movie “The Silence of the Lambs”, welcome to the club — you are not alone!

Despite the fact that the term “cannibalization” does not sound very suitable for digital marketing, this core concept has been around for a long time. This term simply identifies the issue of having multiple pages competing for the same (or very similar) keywords/keyword clusters, hence the cannibalization.

What do we mean by cannibalization in SEO?

This unfortunate and often unnoticed problem harms the SEO potential of the pages involved. When more than one page has the same/similar keyword target, it creates “confusion” in the eyes of the search engine, resulting in a struggle to decide what page to rank for what term.

For instance, say my imaginary e-commerce website sells shoes online and I have created a dedicated category page that targets the term ‘ankle boots’: www.distilledshoes.com/boots/ankle-boots/

Knowing the importance of editorial content, over time I decide to create two blog posts that cover topics related to ankle boots off the back of keyword research: one post on how to wear ankle boots and another about the top 10 ways to wear ankle boots in 2019:

One month later, I realize that some of my blog pages are actually ranking for a few key terms that my e-commerce category page was initially visible for.

Now the question is: is this good or bad for my website?

Drum roll, please…and the answer is — It depends on the situation, the exact keywords, and the intent of the user when searching for a particular term.

Keyword cannibalization is not black or white — there are multiple grey areas and we will try and go through several scenarios in this blog post. I recommend you spend 5 minutes checking this awesome Whiteboard Friday which covers the topic of search intent extremely well.

How serious of a problem is keyword cannibalization?

Much more than what you might think — almost every website that I have worked on in the past few years has some degree of cannibalization that needs resolving. It is hard to estimate how much a single page might be held back by this issue, as it involves a group of pages whose potential is being limited. So, my suggestion is to treat this issue by analyzing clusters of pages that have some degree of cannibalization rather than single pages.

Where is it most common to find cannibalization problems in SEO?

Normally, you can come across two main placements for cannibalization:

1) At meta data level:

When two or more pages have meta data (title tags and headings mainly) which target the same or very similar keywords, cannibalization occurs. This requires a less labour-intensive type of fix, as only meta data needs adjusting.

For example: my e-commerce site has three boots-related pages, which have the following meta data:

  • /boots/all: title tag “Women’s Boots – Ankle & Chelsea Boots | Distilled Shoes”, H1 “Women’s Ankle & Chelsea Boots”
  • /boots/ankle-boots/: title tag “Women’s Ankle Boots | Distilled Shoes”, H1 “Ankle Boots”
  • /boots/chelsea-boots/: title tag “Women’s Chelsea Boots | Distilled Shoes”, H1 “Chelsea Boots”

These types of keyword cannibalization often occur on e-commerce sites that have many category (or subcategory) pages intended to target specific keywords, such as the example above. Ideally, we would want a generic boots page to target generic boots-related terms, while the other two pages should focus on the specific types of boots we are selling on those pages: ankle and Chelsea.

Why not try the below instead?

  • /boots/all: new title tag “Women’s Boots – All Types of Winter Boots | Distilled Shoes”, new H1 “Women’s Winter Boots”
  • /boots/ankle-boots/: new title tag “Women’s Ankle Boots | Distilled Shoes”, new H1 “Ankle Boots”
  • /boots/chelsea-boots/: new title tag “Women’s Chelsea Boots | Distilled Shoes”, new H1 “Chelsea Boots”

More often than not, we fail to differentiate our e-commerce site’s meta data to target the very specific subgroup of keywords that we should aim for — after all, this is the main point of having so many category pages, no? If you’re interested in the topic, here is a blog post I wrote on the subject.

The fact that e-commerce pages tend to have very little text on them makes meta data very important, as it will be one of the main elements search engines look at to understand how a page differs from the others.

2) At page content level

When cannibalization occurs at page content level (meaning two or more pages tend to cover very similar topics in their body content), it normally needs more work than the above example, since it requires the webmaster to first find all the competing pages and then decide on the best approach to tackle the issue.

For example: say my e-commerce has two blog pages which cover the following topics:

  • /blog/how-to-clean-leather-boots/: suggests how to take care of leather boots so they last longer
  • /blog/boots-cleaning-guide-2019/: shows a 121 guide on how to clean different types of boots

These types of keyword cannibalization typically occur on editorial pages, or on transactional pages with a substantial amount of text.

It is fundamental to clarify something: SEO is often not the main driver when producing editorial content, as different teams are involved in producing content for social and engagement reasons, and fairly so. Especially in larger corporations, it is easy to underestimate how complex it is to find a balance between all departments and how easily things can be missed.

From a pure SEO standpoint, I can assure you that the two pages above are very likely to be subject to cannibalization. Despite the fact they have different editorial angles, they will probably display some degree of duplicated content between them (more on this later).

In the eyes of a search engine, how different are these two blog posts, both of which aim to address a fairly similar intent? That is the main question you should ask yourself when going through this task. My suggestion is the following: Before investing time and resources into creating new pages, make the effort to review your existing content.

What are the types of cannibalization in SEO?

Simply put, you could come across 2 main types:

1) Two or more landing pages on your website that are competing for the same keywords

For instance, it could be the case that, for the keyword “ankle boots”, two of my pages are ranking at the same time:

  • Page A, /boots/all (title tag “Women’s Boots – Ankle & Chelsea Boots | Distilled Shoes”): position 8 for the keyword “ankle boots”
  • Page B, /boots/ankle-boots/ (title tag “Women’s Ankle Boots | Distilled Shoes”): position 5 for the keyword “ankle boots”

Is this a real cannibalization issue? The answer is both yes and no.

If multiple pages are ranking for the same term, it is because a search engine finds elements of both pages that they think respond to the query in some way — so technically speaking, they are potential ‘cannibals’.

Does it mean you need to panic and change everything on both pages? Surely not. It very much depends on the scenario and your objective.

Scenario 1

In the instances where both pages have really high rankings on the first page of the SERPs, this could work to your advantage: more space occupied means more traffic for your pages, so treat it as “good” cannibalization.

If this is the case, I recommend you do the following:

  • Consider changing the meta descriptions to make them more enticing and unique from each other. You do not want both pages to show the same message and fail to impress the user.
  • In case you realise that amongst the two pages, the “secondary/non-intended page” is ranking higher (for example: Page A /boots/all ranks higher than Page B /boots/ankle-boots/ for the term ‘ankle boots’), you should check on Google Search Console (GSC) to see which page is getting the most amount of clicks for that single term. Then, decide if it is worth altering other elements of your SEO to better address that particular keyword.

For instance, what would happen if I removed the term ankle boots from my /boots/all (Page A) title tag and page copy? If Google reacts by favouring my /boots/ankle-boots/ page instead (Page B), which may gain higher positions, then great! If not, the worst case scenario is you can revert the changes back and keep enjoying the two results on page one of the SERP.

  • Page A, /boots/all: new title tag “Women’s Boots – Chelsea Boots & many more types | Distilled Shoes”; ranking for the keyword “ankle boots”: test and decide

Scenario 2

In the instances where page A has high rankings on page one of the SERPs and page B is nowhere to be seen (beyond the top 15–20 results), it is up to you to decide if this minor cannibalization is worth your time and resources, as it may not be urgent.

If you decide that it is worth pursuing, I recommend you do the following:

  • Keep monitoring the keywords for which the two pages seem to show, in case Google might react differently in the future.
  • Come back to this minor cannibalization point after you have addressed your most important issues.

Scenario 3

In the instances where both pages are ranking in page two or three of the SERP, then it might be the case that your cannibalization is holding one or both of them back.

If this is the case, I recommend you do the following:

  • Check on GSC to see which of your pages is getting the most amount of clicks for that single keyword. You should also check on similar terms, since keywords on page two or three of the SERP will show very low clicks in GSC. Then, decide which page should be your primary focus — the one that is better suited from a content perspective — and be open to test changes for on-page SEO elements of both pages.
  • Review your title tags, headings, and page copies and try to find instances where both pages seem to overlap. If the degree of duplication between them is really high, it might be worth consolidating/canonicalising/redirecting one to the other (I’ll touch on this below).

2) Two or more landing pages on your website that are flip-flopping for the same keyword

It could be the case that, for instance, the keyword “ankle boots” for two of my pages are ranking at different times, as Google seems to have a difficult time deciding which page to choose for the term.

  • Page A, /boots/all: position 6 for “ankle boots” on 1st of January, not ranking on 5th of January
  • Page B, /boots/ankle-boots/: not ranking for “ankle boots” on 1st of January, position 8 on 5th of January

This is a common issue that I am sure many of you have encountered, in which landing pages seem to be very volatile and rank for a group of keywords in an inconsistent manner. If this happens to you, try and find answers to the following questions:

When did this flip-flopping start?

Pinpointing the right moment in time where this all began might help you understand how the problem originated in the first place. Maybe a canonical tag was added or went missing, or maybe some changes to your on-page elements or an algorithm update mixed things up?

How many pages flip-flop between each other for the same keyword?

The fewer pages subject to volatility, the better and easier to address. Try to identify which pages are involved and inspect all elements that might have triggered this instability.

How often do these pages flip-flop?

Try and find out how often the ranking page for a keyword has changed: the fewer times, the better. Cross reference the time of the changes with your knowledge of the site in case it might have been caused by other adjustments.

If the flip-flop has occurred only once and seems to have stopped, there is probably nothing to worry about, as it’s likely a one-off volatility in the SERP. At the end of the day, we need to remember that Google runs tests and changes things almost every day.

How to identify which pages are victims of cannibalization

I will explain what tools I normally use to detect major cannibalization fluxes, but I am sure there are several ways to reach the same results — if you want to share your tips, please do comment below!

Tools to deploy for Type 1 of cannibalization: When two or more landing pages are competing for the same keyword

I know we all love tools that help you speed up long tasks, and one of my favorites is Ahrefs. I recommend using their fantastic method which will find your ‘cannibals’ in minutes.

Watch their five minute video here to see how to do it.

I am certain SEMrush, SEOMonitor, and other similar tools offer the same ability to retrieve that kind of data, maybe just not as fast as Ahrefs’ method listed above. If you do not have any tools at your disposal, Google Search Console and Google Sheets will be your friends, but it will be more of a manual process.

Tools to deploy for Type 2 of cannibalization: When two or more landing pages are flip-flopping for the same keyword

Ideally, most rank tracking tools will be able to do this: discover when a keyword has changed ranking URL over time. Back in the day, I used tracking tools like Linkdex and Pi Datametrics to do just this.

At Distilled, we use STAT, which displays this data under History, within the main Keyword tab — see screenshot below as example.

One caveat of these kinds of ranking tools is that this data is often accessible only by keyword and will require data analysis. This means it may take a bit of time to check all keywords involved in this cannibalization, but the insights you’ll glean are well worth the effort.

Google Data Studio Dashboard

If you’re looking for a speedier approach, you can build a Google Data Studio dashboard that connects to your GSC to provide data in real time, so you don’t have to check on your reports when you think there is a cannibalization issue (credit to my colleague Dom).

Our example of a dashboard comprises two tables (see screenshots below):

The table above captures the full list of keyword offenders for the period of time selected. For instance, keyword ‘X’ at the top of the table has generated 13 organic clicks (total_clicks) from GSC over the period considered and changed ranking URL approximately 24 times (num_of_pages).

The second table (shown above) indicates the individual pages that have ranked for each keyword for the period of time selected. In this particular example, for our keyword X (which, as we know, has changed URLs 24 times in the period of time selected) the column path would show the list of individual URLs that have been flip flopping.
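
If you want to prototype the same idea outside Data Studio, here is a minimal pandas sketch under the assumption that you have a daily export of keyword, ranking page, and clicks; the column names and rows are illustrative, not the Search Console API’s. It flags keywords that ranked with more than one URL over the period, which is a slightly cruder measure than counting every individual URL switch.

    import pandas as pd

    # Hypothetical daily export: one row per day per (keyword, ranking page).
    df = pd.DataFrame([
        {"date": "2019-01-01", "keyword": "ankle boots", "page": "/boots/all", "clicks": 3},
        {"date": "2019-01-05", "keyword": "ankle boots", "page": "/boots/ankle-boots/", "clicks": 5},
        {"date": "2019-01-09", "keyword": "ankle boots", "page": "/boots/all", "clicks": 2},
        {"date": "2019-01-02", "keyword": "chelsea boots", "page": "/boots/chelsea-boots/", "clicks": 4},
    ])

    summary = (df.groupby("keyword")
                 .agg(total_clicks=("clicks", "sum"), num_of_pages=("page", "nunique"))
                 .reset_index())

    # Keywords that ranked with more than one URL are the flip-flopping candidates.
    print(summary[summary["num_of_pages"] > 1].sort_values("total_clicks", ascending=False))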

What solutions should I implement to tackle cannibalization?

It is important to distinguish the different types of cannibalization you may encounter and try to be flexible with solutions — not every fix will be the same.

I started touching on possible solutions when I was talking about the different types of cannibalization, but let’s take a more holistic approach and explain what solutions are available.

301 redirection

Ask yourself this question: do I really need all the pages that I found cannibalizing each other?

In several instances the answer is no, and if that is the case, 301 redirects are your friends.

For instance, you might have created a new (or very similar) version of the same article your site posted years ago, so you may consider redirecting one of them — generally speaking, the older URL might have more equity in the eyes of search engines and potentially would have attracted some backlinks over time.

  • Page A, blog/how-to-wear-ankle-boots: published May 2016
  • Page B, blog/how-to-wear-ankle-boots-in-2019: published December 2018

Check if page A has backlinks and, if so, how many keywords it is ranking for (and how well it is ranking for those keywords)

What to do:

  • If page A has enough equity and visibility, do a 301 redirect from page B to page A, change all internal links (coming from the site to page B) to page A, and update metadata of page A if necessary (including the reference of 2019 for instance)
  • If not, do the opposite: complete a 301 redirect from page A to page B and change all internal links (coming from the site to page A) to page B.

Canonicalization

In case you do need all the pages that are cannibalizing for whatever reason (maybe PPC, social, or testing purposes, or maybe it is just because they are still relevant) then canonical tags are your friends. The main difference with a 301 redirect is that both pages will still exist, while the equity from page A will be transferred to page B.

Let’s say you created a new article that covers a similar topic to another existing one (but has a different angle) and you find out that both pages are cannibalizing each other. After a quick analysis, you may decide you want Page B to be your “primary”, so you can use a canonical tag from page A pointing to page B. You would want to use canonicalization if the content of the two pages is diverse enough that users should see it but not so much that search engines should think it’s different.

  • Page A, blog/how-to-wear-ankle-boots-with-skinny-jeans: published December 2017
  • Page B, blog/how-to-wear-high-ankle-boots: published January 2019

What to do:

  • Use a canonical tag from page A to page B. As a reinforcement to Google, you could also use a self-referencing canonical tag on page B.
  • After having assessed accessibility and internal link equity of both pages, you may want to change all/some internal links (coming from the site to page A) to page B if you deem it useful.

Pages re-optimisation

As already touched on, it primarily involves a metadata type of cannibalization, which is what I named as type 1 in this article. After identifying the pages whose meta data seem to overlap or somehow target the same/highly similar keywords, you will need to decide which is your primary page for that keyword/keyword group and re-optimize the competing pages.

See the example earlier in the blog post to get a better idea.

Content consolidation

This type of solution involves consolidating a part or the entire content of a page into another. Once that has happened, it is down to you to decide if it is worth keeping the page you have stripped content from or just 301 redirect it to the other.

You would use consolidation as an option if you think the cannibalization is a result of similar or duplicated content between multiple pages, which is more likely to be the type 2 of cannibalization, as stated earlier. It is essential to establish your primary page first so you are able to act on the competing internal pages. Content consolidation requires you to move the offending content to your primary page in order to stop this problem and improve your rankings.

For example, you might have created a new article that falls under a certain content theme (in this instance, boots cleaning). You then realize that a paragraph of your new page B touches on leather boots and how to take care of them, which is something you have covered in page A. In case both articles respond to similar intents (one targeting cleaning leather only, the other targeting cleaning boots in general), it is worth consolidating the offending content from page B into page A and adding an internal link to page A in place of the paragraph that covers leather boots on page B.

  • Page A, blog/how-to-clean-leather-boots: published December 2017
  • Page B, /blog/boots-cleaning-guide-2019/: published January 2019

What to do:

  • Find the offending part of content on page B, review it and consolidate the most compelling bits to page A
  • Replace the stripped content on page B with a direct internal link pointing to page A
  • Often after having consolidated the content of a page to another, there is no scope for the page where content has been stripped from so it should just be redirected (301).

How can I avoid cannibalization in the first place?

The best way to prevent cannibalization from happening is a simple yet underrated task: keyword mapping. Implementing a correct mapping strategy for your site is a key part of your SEO, as important as your keyword research.

Carson Ward has written an awesome Moz Blog post about the topic, I recommend you have a look.

Don’t take ‘intent’ for granted

Another way to avoid cannibalization, and the last tip I want to share with you, involves something most of you are familiar with: search intent.

Most of the time, we take things for granted, assuming Google will behave in a certain way and show certain type of results. What I mean by this is: When you work on your keyword mapping, don’t forget to check what kind of results search engines display before assuming a certain outcome. Often, even Google is not sure and will not always get intent right.

For instance, when searching for ‘shoes gift ideas’ and ‘gift ideas for shoe lovers’ I get two very different SERPs despite the fact that my intent is kind of the same: I am looking for ideas for a gift which involves shoes.

The SERP on the left shows a SERP for a query of “shoes gift ideas”. It displays a row of pictures from Google Images with the link to see more, one editorial page (informational content), and then the rest of results are all transactional/e-commerce pages for me to buy from. Google has assumed that I’d like to see commercial pages as I might be close to a conversion.

The SERP on the right shows a SERP for a query of “gift ideas for shoe lovers”, displaying a row of Google Shopping ads and then a featured snippet, taken from an editorial page, while the rest are a mix of transactional and editorial pages, with Pinterest ranking twice in the top 10. Clearly Google is not sure what I would prefer to see here. Am I still in the consideration phase or am I moving to conversion?

The example above is just one of the many I encountered when going through my keyword research and mapping task. Before going after a certain keyword/keyword cluster, try and address all these points:

  • Check if one of your existing pages has already covered it.
  • If so, how well have you covered the keyword target? What can you do to improve your focus? Is there any cannibalization that is holding you back?
  • If you do not have a page for it, is it worth creating one and what implications will it have on your existing pages?
  • Check what results Google is displaying for that keyword target, as it might be different from your expectations.
  • Once you have created a new page/s, double check this has not created unintentional and unplanned cannibalization further down the line by using the tips in this post.

Conclusion

Keyword cannibalization is an underrated, but rather significant, problem, especially for sites that have been running for several years and end up having lots of pages. However, fear not — there are simple ways to monitor this issue and hopefully this post can help you speed up the whole process to find such instances.

Most of the time, it is just a matter of using the most logical approach while considering other SEO elements such as backlinks, crawlability, and content duplication. If possible, always test your changes before applying them site-wide or making them permanent.

If you, like me, are a fan of knowledge sharing and you think there are better ways to help with cannibalization, please comment below!



Moz Blog

Posted in Latest NewsComments Off

The Basics of Building an Intent-Based Keyword List

Posted by TheMozTeam

This post was originally published on the STAT blog.


In this article, we’re taking a deep dive into search intent.

It’s a topic we’ve covered before with some depth. This STAT whitepaper looked at how SERP features respond to intent, and a few bonus blog posts broke things down even further and examined how individual intent modifiers impact SERP features, the kind of content that Google serves at each stage of intent, and how you can set up your very own search intent projects. (And look out for Seer’s very own Scott Taft’s upcoming post this week on how to use STAT and Power BI to create your very own search intent dashboard.)

Search intent is the new demographics, so it only made sense to get up close and personal with it. Of course, in order to bag all those juicy search intent tidbits, we needed a great intent-based keyword list. Here’s how you can get your hands on one of those.

Gather your core keywords

First, before you can even think about intent, you need to have a solid foundation of core keywords in place. These are the products, features, and/or services that you’ll build your search intent funnel around.

But goodness knows that keyword list-building is more of an art than a science, and even the greatest writers (hi, Homer) needed to invoke the muses (hey, Calliope) for inspiration, so if staring at your website isn’t getting the creative juices flowing, you can look to a few different places for help.

Snag some good suggestions from keyword research tools

Lots of folks like to use the Google Keyword Planner to help them get started. Ubersuggest and Yoast’s Google Suggest Expander will also help add keywords to your arsenal. And Answer The Public gives you all of that, and beautifully visualized to boot.

Simply plunk in a keyword and watch the suggestions pour in. Just remember to be critical of these auto-generated lists, as odd choices sometimes slip into the mix. For example, apparently we should add [free phones] to our list of [rank tracking] keywords. Huh.

Spot inspiration on the SERPs

Two straight-from-the-SERP resources that we love for keyword research are the “People also ask” box and related searches. These queries are Google-vetted and plentiful, and also give you some insight into how the search engine giant links topics.

If you’re a STAT client, you can generate reports that will give you every question in a PAA box (before it gets infinite), as well as each of the eight related searches at the bottom of a SERP. Run the reports for a couple of days and you’ll get a quick sense of which questions and queries Google favours for your existing keyword set.

A quick note about language & location

When you’re in the UK, you push a pram, not a stroller; you don’t wear a sweater, you wear a jumper. This is all to say that if you’re in the business of global tracking, it’s important to keep different countries’ word choices in mind. Even if you’re not creating content with them, it’s good to see if you’re appearing for the terms your global searchers are using.

Add your intent modifiers

Now it’s time to tackle the intent bit of your keyword list. And this bit is going to require drawing some lines in the sand because the modifiers that occupy each intent category can be highly subjective — does “best” signal transactional intent instead of commercial?

We’ve put together a loose guideline below, but the bottom line is that intent should be structured and classified in a way that makes sense to your business. And if you’re stuck for modifiers to marry to your core keywords, here’s a list of 50+ to help with the coupling.

Informational intent

The searcher has identified a need and is looking for the best solution. These keywords are the core keywords from your earlier hard work, plus every question you think your searchers might have if they’re unfamiliar with your product or services.

Your informational queries might look something like:

  • [product name]
  • what is [product name]
  • how does [product name] work
  • how do I use [product name]

Commercial intent

At this stage, the searcher has zeroed in on a solution and is looking into all the different options available to them. They’re doing comparative research and are interested in specific requirements and features.

For our research, we used best, compare, deals, new, online, refurbished, reviews, shop, top, and used.

Your commercial queries might look something like:

  • best [product name]
  • [product name] reviews
  • compare [product name]
  • what is the top [product name]
  • [colour/style/size] [product name]

Transactional intent (including local and navigational intent)

Transactional queries are the most likely to convert and generally include terms that revolve around price, brand, and location, which is why navigational and local intent are nestled within this stage of the intent funnel.

For our research, we used affordable, buy, cheap, cost, coupon, free shipping, and price.

Your transactional queries might look something like:

  • how much does [product name] cost
  • [product name] in [location]
  • order [product name] online
  • [product name] near me
  • affordable [brand name] [product name]

A tip if you want to speed things up

A super quick way to add modifiers to your keywords and save your typing fingers is by using a keyword mixer like this one. Just don’t forget that using computer programs for human-speak means you’ll have to give them the ol’ once-over to make sure they still make sense.
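
If you would rather script it than use a web-based mixer, here is a minimal sketch in Python; the core keywords and modifiers are placeholders to swap for your own lists.

    from itertools import product

    core_keywords = ["rank tracker", "keyword research tool"]
    commercial_modifiers = ["best", "compare", "reviews", "top"]

    # Prepend each modifier to each core keyword.
    mixed = [f"{modifier} {keyword}" for modifier, keyword in product(commercial_modifiers, core_keywords)]

    for query in mixed:
        print(query)
    # Eyeball the output afterwards; machine-generated combinations don't always read naturally.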

Audit your list

Now that you’ve reached for the stars and got yourself a huge list of keywords, it’s time to bring things back down to reality and see which ones you’ll actually want to keep around.

No two audits are going to look the same, but here are a few considerations you’ll want to keep in mind when whittling your keywords down to the best of the bunch.

  1. Relevance. Are your keywords represented on your site? Do they point to optimized pages?
  2. Search volume. Are you after highly searched terms or looking to build an audience? You can get the SV goods from the Google Keyword Planner.
  3. Opportunity. How many clicks and impressions are your keywords raking in? While not comprehensive (thanks, Not Provided), you can gather some of this info by digging into Google Search Console.
  4. Competition. What other websites are ranking for your keywords? Are you up against SERP monsters like Amazon? What about paid advertising like shopping boxes? How much SERP space are they taking up? Your friendly SERP analytics platform with share of voice capabilities (hi!) can help you understand your search landscape.
  5. Difficulty. How easy is your keyword going to be to win? Search volume can give you a rough idea — the higher the search volume, the stiffer the competition is likely to be — but for a different approach, Moz’s Keyword Explorer has a Difficulty score that takes Page Authority, Domain Authority, and projected click-through-rate into account.

By now, you should have a pretty solid plan of attack to create an intent-based keyword list of your very own to love, nurture, and cherish.

If, before you jump headlong into it, you’re curious what a good chunk of this is going to look like in practice, give this excellent article by Russ Jones a read, or drop us a line. We’re always keen to show folks why tracking keywords at scale is the best way to uncover intent-based insights.

Read on, readers!

More in our search intent series:



Moz Blog


Posted in Latest NewsComments Off
