Tag Archive | "Major"

Subfolders, Subdirectories and Subdomains: The URL difference that can drive a major increase in organic traffic

Do you understand the difference between subdomains and subdirectories/subfolders? Using them wrong could cost you organic traffic.
MarketingSherpa Blog


A Look Back at a Great 2017: 5 Major Moz Product Investments and a Sneak Peek Into 2018

Posted by adamf

It’s hard to believe that 2017 is already past. We entered the year with big ambitions and we’ve made some great strides. As has become tradition, I’ve compiled a rundown of some of the most interesting updates that you may have seen (or missed) this past year. We’ve intentionally focused on significant product updates, but I’ve also shared a little about some newer programs that provide value for customers in different ways.

TL;DR, here are some of the larger and more interesting additions to Moz in 2017:

  1. Keywords by Site: Keyword Explorer adds site-based keyword research and competitive intelligence
  2. Site Crawl V2: Overhauled Site Crawl for better auditing and workflow
  3. Major investments in infrastructure: Better performance and resilience across the Moz toolset
  4. New instructor-led training programs: Targeted classes to level-up your SEO knowledge
  5. Customer Success: Custom walkthroughs to help you get the most out of Moz
  6. Bonus! MozPod: Moz’s new free podcast keeps you up to date on the latest industry topics and trends

Big updates

This year and last, we’ve focused disproportionately on releasing large infrastructural improvements, new datasets, and foundational product updates. We feel these are crucial elements that serve the core needs of SEOs and will fuel frequent improvements and iterations for years to come.

To kick things off, I wanted to share some details about two big updates from 2017.


1) Keywords by Site: Leveling up keyword research and intelligence

Rank tracking provides useful benchmarks and insights for specific, targeted keywords, but you can’t track all of the keywords that are relevant to you. Sometimes you need a broader look at how visible your sites (and your competitors’ sites) are in Google results.

We built Keywords by Site to provide this powerful view into your Google presence. This brand-new dataset in Moz significantly extends Keyword Explorer and improves the quality of results in many other areas throughout Moz Pro. Our US corpus currently includes 40 million Google SERPs updated every two weeks, and allows you to do the following:

See how visible your site is in Google results

This view not only shows how authoritative a site is from a linking perspective, but also shows how prominent a site is in Google search results.

Compare your ranking prominence to your competitors

Compare up to three sites to get a feel for their relative scale of visibility and keyword ranking overlap. Click on any section in the Venn diagram to view the keywords that fall into that section.
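If you prefer to work with raw keyword lists outside the app, the same overlap idea is easy to reproduce with basic set operations. Here's a minimal sketch, assuming you've exported each site's ranking keywords to a text file with one keyword per line; the file names are hypothetical, not a documented Moz workflow:

```python
# A minimal sketch: load each site's ranking keywords and compute overlaps,
# mirroring the sections of the Venn diagram described above.

def load_keywords(path):
    """Read one keyword per line into a set, ignoring blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

site_a = load_keywords("yoursite_keywords.txt")        # hypothetical exports
site_b = load_keywords("competitor1_keywords.txt")
site_c = load_keywords("competitor2_keywords.txt")

# The three-way intersection corresponds to the center of the Venn diagram;
# the difference shows keywords only the competitors rank for.
all_three = site_a & site_b & site_c
only_competitors = (site_b | site_c) - site_a

print(f"Keywords all three sites rank for: {len(all_three)}")
print(f"Keywords competitors rank for but you don't: {len(only_competitors)}")
```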

Dig deep: Sort, filter, and find opportunities, then stash them in keyword lists

For example, let’s say you’re looking to determine which pages or content on your site might only require a little nudge to garner meaningful search visibility and traffic. Run a report for your site in Keyword Explorer and then use the filters to quickly home in on these opportunities.
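If you'd rather slice an exported report yourself, a filter like this can be approximated in a few lines. This is a rough sketch, assuming a CSV export with hypothetical columns named "Keyword", "Rank", "Monthly Volume", and "Difficulty"; the column names and thresholds are illustrative, not Moz's actual schema:

```python
import csv

# Keep keywords ranking just off the top spots, with real volume and modest
# difficulty -- the "little nudge" candidates described above.
opportunities = []
with open("keywords_by_site_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        rank = int(row["Rank"])
        volume = int(row["Monthly Volume"])
        difficulty = int(row["Difficulty"])
        if 4 <= rank <= 20 and volume >= 100 and difficulty <= 50:
            opportunities.append((row["Keyword"], rank, volume))

# Highest-volume opportunities first.
for keyword, rank, volume in sorted(opportunities, key=lambda t: -t[2]):
    print(f"{keyword}: rank {rank}, ~{volume} searches/month")
```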

Our focus on data quality

We’ve made a few decisions to help ensure the freshness and accuracy of our keyword corpus. These add to the cost and work of maintaining this dataset, but we feel they make a discernible difference in quality.

  • We recollect all of our keyword data every 2 weeks. This means that the results you see are more recent and more similar to the results on the day that you’re researching.
  • We cycle out up to 15 million of our keywords every month. This means that as new keywords or terms trend up in popularity, we add them to our corpus, replacing terms that are no longer getting much search volume.

A few improvements we’ve made since launch:

  • Keyword recommendations in your campaigns (tracked sites) are much improved and now backed by our keyword corpus.
  • These keyword suggestions are also included in your weekly insights, suggesting new keywords worth tracking and pages worth optimizing.
  • Coming very soon: We’re also on the cusp of launching keyword corpuses for the UK, Canada, and Australia. Stay tuned.

A few resources to help you get more from Keywords by Site:

Try out Keywords by Site!


2) Site Crawl V2: Big enhancements to site crawling and auditing

Another significant project we completed in 2017 was a complete rewrite of our aging Site Crawler. In short, our new crawler is faster, more reliable, can crawl more pages, and surfaces more issues. We’ve also made some enhancements to the workflow, to make regular crawls more customizable and easy to manage. Here are a few highlights:

Week-over-week crawl comparisons

Our new crawler keeps tabs on what happened in your previous crawl to show you which specific issues are no longer present, and which are brand new.

Ignore (to hide) individual issues or whole issue types

This feature was added in response to a bunch of customer requests. While Moz does its best to call out the issues and priorities that apply to most sites, not all sites or SEOs have the same needs. For example, if you regularly noindex a big portion of your site, you don’t need us to keep reminding you that you’ve applied noindex to a huge number of pages. If you don’t want them showing up in your reports, just ignore individual issues or the entire issue type.
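Before ignoring the noindex issue type wholesale, you might want to spot-check that the flagged pages really are intentionally noindexed. Here's a small, standalone check (not part of Moz) that looks for a noindex directive in either the X-Robots-Tag header or a meta robots tag; the URLs are placeholders:

```python
import re
import requests

def is_noindexed(url):
    """Return True if the page signals noindex via header or meta robots tag."""
    resp = requests.get(url, timeout=10)
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        return True
    # Crude but serviceable check for <meta name="robots" content="...noindex...">
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        resp.text, re.IGNORECASE)
    return bool(meta and "noindex" in meta.group(1).lower())

for url in ["https://example.com/tag/widgets/", "https://example.com/blog/"]:
    print(url, "noindex" if is_noindexed(url) else "indexable")
```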

Another workflow improvement we added was the ability to mark an issue as fixed. This allows you to get it out of your way until the next crawl runs and verifies the fix.

All Pages view with improved sorting and filtering

If you’re prioritizing across a large number of pages or trying to track down an issue in a certain area of your site, you can now sort all pages crawled by Issue Count, Page Authority, or Crawl Depth. You can also filter to show, for instance, all pages in the /blog section of your site that are redirects and have a crawl issue.
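The same kind of slicing works on an export of the All Pages view. Here's a minimal sketch, assuming the export is a CSV with hypothetical columns "URL", "Status Code", "Issue Count", and "Crawl Depth"; the column names are assumptions, not the actual export format:

```python
import csv
from urllib.parse import urlparse

with open("site_crawl_all_pages.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Pages under /blog that are redirects (3xx) and carry at least one issue.
blog_redirect_issues = [
    r for r in rows
    if urlparse(r["URL"]).path.startswith("/blog")
    and r["Status Code"].startswith("3")
    and int(r["Issue Count"]) > 0
]

# Shallow pages first, since they are usually the most visible ones to fix.
for r in sorted(blog_redirect_issues, key=lambda r: int(r["Crawl Depth"])):
    print(r["URL"], r["Status Code"], r["Issue Count"])
```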

Recrawl to verify fixes

Moz’s crawler monitors your site by crawling it every week. But if you’ve made some changes and want to verify them, you can now recrawl your site in between regular weekly crawls instead of waiting for the next scheduled crawl to start.

Seven new issues checked and tracked

These include such favorites as detecting Thin Content, Redirect Chains, and Slow Pages. While we were at it, we revamped duplicate page detection and improved the UI to help you better analyze clusters of duplicate content and figure out which page should be canonical.

A few resources to help you get more from Site Crawl:


3) Major investments in infrastructure for performance and resilience

You may not have directly noticed many of the updates we’ve made this year. We made some significant investments in Moz Pro and Moz Local to make them faster and more reliable, and to allow us to build new features more quickly. But here are a few tangible manifestations of these efforts:

“Infinite” history on organic Moz Pro search traffic reports

Okay, infinite is a bit of a stretch, but we used to only show the last 12 months or weeks of data. Now we’ll show data from the very inception of a campaign, broken down by weeks or months. This is made possible by an updated architecture that makes full historical data easy to surface and present in the application. It also allows for custom access to selected date ranges.

Also worth noting is that the new visualization shows how many different pages were receiving organic search traffic in context with total organic search traffic. This can help you figure out whether a traffic increase was due to improved rankings across many pages, or just a spike in organic traffic for one or a few pages.

More timely and reliable access to Moz Local data at all scales

As Moz Local has brought on more and bigger customers with large numbers of locations, the team discovered a need to bolster systems for speed and reliability. A completely rebuilt scheduling system and improved core location data systems help ensure all of your data is collected and easy to access when you need it.

Improved local data distribution

Moz Local distributes your location data through myriad partners, each of which has its own formats and interfaces. The Local team updated and fine-tuned those third-party connections to improve the quality of the data and speed of distribution.


4) New instructor-led training programs: Never stop learning

Not all of our improvements this year have shown up in the product. Another investment we’ve made is in training. We’ve gotten a lot of requests for this over the years and are finally delivering. Brian Childs, our trainer extraordinaire, has built this program from the ground up. It includes:

  • Boot camps to build up core skills
  • Advanced Seminars to dig into more intensive topics
  • Custom Training for businesses that want a more tailored approach

We have even more ambitious plans for 2018, so if training interests you, check out all of our training offerings here.


5) Customer Success: Helping customers get the most out of Moz

Our customer success program took off this year and has one core purpose: to help customers get maximum value from Moz. Whether you’re a long-time customer looking to explore new features or you’re brand new to Moz and figuring out how to get started, our success team offers product webinars every week, as well as one-on-one product walkthroughs tailored to your needs, interests, and experience level.

The US members of our customer success team hone their skills at a local chocolate factory. (Not pictured: our fantastic team members in the UK, Australia, and Dubai.)

If you want to learn more about Moz Pro, check out a webinar or schedule a walkthrough.


Bonus! MozPod: Moz’s new free podcast made its debut

Okay, this really strays from product news, but another fun project that’s been gaining momentum is MozPod. This came about as a side passion project by our ever-ambitious head trainer. Lord knows that SEO and digital marketing are fast-moving and ever-changing; to help you keep up on hot topics and new developments, we’ve started MozPod. This podcast covers a range of topics, drawing from the brains of key folks in the industry. With topics ranging from structured data and app store optimization to machine learning and even blockchain, there’s always something interesting to learn about. If you’ve got an idea for an episode or a topic you’d like to hear about, submit it here.

Join Brian every week for a new topic and guest.


What’s next?

We have a lot planned for 2018 — probably way too much. But one thing I can promise is that it won’t be a dull year. I prefer not to get too specific about projects that we’ve not yet started, but here are a few things already in the works:

  • A significant upgrade to our link data and toolset
  • On-demand Site Crawl
  • Added keyword research corpuses for the UK, Australia, and Canada
  • Expanded distribution channels for local to include Facebook, Waze, and Uber
  • More measurement and analytics features around local rankings, categories, & keywords
  • Verticalized solutions to address specific local search needs in the restaurant, hospitality, financial, legal, & medical sectors

On top of these and many other features we’re considering, we also plan to make it a lot easier for you to use our products. Right now, we know it can be a bit disjointed within and between products. We plan to change that.

We’ve also waited too long to solve for some specific needs of our agency customers. We’re prioritizing some key projects that’ll make their jobs easier and their relationships with Moz more valuable.


Thank you!

Before I go, I just want to thank you all for sharing your support, suggestions, and critical feedback. We strive to build the best SEO data and platform for our diverse and passionate customers. We could not succeed without you. If you’d like to be a part of making Moz a better platform, please let us know. We often reach out to customers and community members for feedback and insight, so if you’re the type who likes to participate in user research studies, customer interviews, beta tests, or surveys, please volunteer here.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


How on-site search can drive holiday revenue & help e-commerce sites compete against major retailers

According to SLI Systems, people who use on-site search are more likely to make a purchase than visitors who only browse a website.

The post How on-site search can drive holiday revenue & help e-commerce sites compete against major retailers appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Big Data, Big Problems: 4 Major Link Indexes Compared

Posted by russangular

Given this blog’s readership, chances are good you will spend some time this week looking at backlinks in one of the growing number of link data tools. We know backlinks continue to be one of the most important parts of Google’s ranking algorithm, if not the most important. We tend to take these link data sets at face value, though, in part because they are all we have. But when your rankings are on the line, is there a better way to determine which data set is the best? How should we go about assessing different link indexes like Moz, Majestic, Ahrefs, and SEMrush for quality? Historically, there have been 4 common approaches to this question of index quality…

  • Breadth: We might choose to look at the number of linking root domains any given service reports. We know
    that referring domains correlates strongly with search rankings, so it makes sense to judge a link index by how many unique domains it has
    discovered and indexed.
  • Depth: We also might choose to look at how deep the web has been crawled, looking more at the total number of URLs
    in the index, rather than the diversity of referring domains.
  • Link Overlap: A more sophisticated approach might count the number of links an index has in common with Google Webmaster
    Tools.
  • Freshness: Finally, we might choose to look at the freshness of the index. What percentage of links in the index are still live? (A quick way to spot-check this is sketched after this list.)
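The freshness check, for instance, can be approximated by sampling (linking page, target) pairs reported by an index and testing whether each linking page still resolves and still references the target. This is a rough sketch with made-up sample pairs, not any vendor's methodology:

```python
import requests

def link_still_live(source_url, target_domain):
    """Return True if the linking page resolves and still mentions the target domain."""
    try:
        resp = requests.get(source_url, timeout=10)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and target_domain in resp.text

# Hypothetical sample of pairs pulled from a link index.
sample = [
    ("https://example.com/resources/", "yoursite.com"),
    ("https://example.org/blogroll/", "yoursite.com"),
]
live = sum(link_still_live(src, tgt) for src, tgt in sample)
print(f"{live}/{len(sample)} sampled links still live "
      f"({100.0 * live / len(sample):.0f}%)")
```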

There are a number of really good studies (some newer than others) using these techniques that are worth checking out when you get a chance:

  • BuiltVisible analysis of Moz, Majestic, GWT, Ahrefs and Search Metrics
  • SEOBook comparison of Moz, Majestic, Ahrefs, and Ayima
  • MatthewWoodward
    study of Ahrefs, Majestic, Moz, Raven and SEO Spyglass
  • Marketing Signals analysis of Moz, Majestic, Ahrefs, and GWT
  • RankAbove comparison of Moz, Majestic, Ahrefs and Link Research Tools
  • StoneTemple study of Moz and Majestic

While these are all excellent at addressing the methodologies above, there is a particular limitation with all of them: they miss one of the most important metrics we need to determine the value of a link index, proportional representation to Google’s link graph. So here at Angular Marketing, we decided to take a closer look.

Proportional representation to Google Search Console data

So, why is it important to determine proportional representation? Many of the most important and valued metrics we use are built on proportional
models. PageRank, MozRank, CitationFlow and Ahrefs Rank are proportional in nature. The score of any one URL in the data set is relative to the
other URLs in the data set. If the data set is biased, the results are biased.

A Visualization

Link graphs are biased by their crawl prioritization. Because there is no full representation of the Internet, every link graph, even Google’s,
is a biased sample of the web. Imagine for a second that the picture below is of the web. Each dot represents a page on the Internet,
and the dots surrounded by green represent a fictitious index by Google of certain sections of the web.

Of course, Google isn’t the only organization that crawls the web. Other organizations like Moz,
Majestic, Ahrefs, and SEMrush
have their own crawl prioritizations which result in different link indexes.

In the example above, you can see different link providers trying to index the web like Google. Link data provider 1 (purple) does a good job
of building a model that is similar to Google. It isn’t very big, but it is proportional. Link data provider 2 (blue) has a much larger index,
and likely has more links in common with Google than link data provider 1, but it is highly disproportional. So, how would we go about measuring
this proportionality? And which data set is the most proportional to Google?

Methodology

The first step is to determine a measurement of relativity for analysis. Google doesn’t give us very much information about their link graph.
All we have is what is in Google Search Console. The best source we can use is referring domain counts. In particular, we want to look at
what we call
referring domain link pairs. A referring domain link pair would be something like ask.com->mlb.com: 9,444, which means
that ask.com links to mlb.com 9,444 times.

Steps

  1. Determine the root linking domain pairs and values for 100+ sites in Google Search Console
  2. Determine the same for Ahrefs, Moz, Majestic Fresh, Majestic Historic, and SEMrush
  3. Compare the referring domain link pairs of each data set to Google, assuming a Poisson Distribution (a simplified sketch of this comparison follows this list)
  4. Run simulations of each data set’s performance against each other (e.g., Moz vs. Majestic, Ahrefs vs. SEMrush, Moz vs. SEMrush, and so on)
  5. Analyze the results
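Here is a simplified sketch of step 3, under stated assumptions: the index's referring-domain-pair counts are rescaled to Google's total (to control for index size), and Google's observed counts are then scored against those rescaled counts with a Poisson log-likelihood. The counts other than the ask.com example are made up, and this illustrates the idea only, not the exact model used in the study:

```python
import math

# Referring domain link pairs: (source domain, target domain) -> link count.
google = {("ask.com", "mlb.com"): 9444, ("espn.com", "mlb.com"): 1200}
index  = {("ask.com", "mlb.com"): 4100, ("espn.com", "mlb.com"): 2500}

def poisson_log_likelihood(google_counts, index_counts):
    """Score how well the index's (rescaled) counts predict Google's counts."""
    scale = sum(google_counts.values()) / sum(index_counts.values())
    total = 0.0
    for pair, observed in google_counts.items():
        rate = max(index_counts.get(pair, 0) * scale, 1e-9)  # avoid log(0)
        # Poisson log-pmf: k*log(lambda) - lambda - log(k!)
        total += observed * math.log(rate) - rate - math.lgamma(observed + 1)
    return total  # higher (less negative) = more proportional to Google

print(poisson_log_likelihood(google, index))
```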

Results

When placed head-to-head, there seem to be some clear winners at first glance. Moz edges out Ahrefs directly, but across the board the two fare quite evenly. Moz, Ahrefs, and SEMrush all seem to be far better than Majestic Fresh and Majestic Historic. Is that really the case? And why?

It turns out there is an inversely proportional relationship between index size and proportional relevancy. This might seem counterintuitive: shouldn’t the bigger indexes be closer to Google? Not exactly.

What does this mean?

Each organization has to create a crawl prioritization strategy. When you discover millions of links, you have to prioritize which ones you might crawl next. Google has a crawl prioritization; so do Moz, Majestic, Ahrefs, and SEMrush. There are lots of different things you might choose to prioritize (see the sketch after the list below)…

  • You might prioritize link discovery. If you want to build a very large index, you could prioritize crawling pages on sites that
    have historically provided new links.
  • You might prioritize content uniqueness. If you want to build a search engine, you might prioritize finding pages that are unlike
    any you have seen before. You could choose to crawl domains that historically provide unique data and little duplicate content.
  • You might prioritize content freshness. If you want to keep your search engine recent, you might prioritize crawling pages that
    change frequently.
  • You might prioritize content value, crawling the most important URLs first based on the number of inbound links to that page.
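To make the idea of crawl prioritization concrete, here is a toy sketch of a crawl frontier that blends two of the signals above: known inbound-link counts and how productive a URL's host has been at yielding new links. The scoring weights are arbitrary assumptions, not anyone's real crawler:

```python
import heapq

frontier = []  # min-heap of (negative priority, url)

def priority(inbound_links, new_links_per_crawl):
    """Blend link value and link-discovery productivity into one score."""
    return 0.7 * inbound_links + 0.3 * new_links_per_crawl

def discover(url, inbound_links, new_links_per_crawl):
    """Queue a newly discovered URL; heapq pops the highest score first."""
    heapq.heappush(frontier, (-priority(inbound_links, new_links_per_crawl), url))

discover("https://example.com/", inbound_links=5000, new_links_per_crawl=40)
discover("https://example.com/archive/2009/", inbound_links=3, new_links_per_crawl=0)

while frontier:
    _, url = heapq.heappop(frontier)
    print("crawl next:", url)  # highest-priority URL first
```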

Chances are, an organization’s crawl priority will blend some of these features, but it’s difficult to design one exactly like Google. Imagine
for a moment that instead of crawling the web, you want to climb a tree. You have to come up with a tree climbing strategy.

  • You decide to climb the longest branch you see at each intersection.
  • One friend of yours decides to climb the first new branch he reaches, regardless of how long it is.
  • Your other friend decides to climb the first new branch she reaches only if she sees another branch coming off of it.

Despite having different climb strategies, everyone chooses the same first branch, and everyone chooses the same second branch. There are only
so many different options early on.

But as the climbers go further and further along, their choices eventually produce differing results. This is exactly the same for web crawlers
like Google, Moz, Majestic, Ahrefs and SEMrush. The bigger the crawl, the more the crawl prioritization will cause disparities. This is not a
deficiency; this is just the nature of the beast. However, we aren’t completely lost. Once we know how index size is related to disparity, we
can make some inferences about how similar a crawl priority may be to Google.

Unfortunately, we have to be careful in our conclusions. We only have a few data points to work with, so it is very difficult to be certain about this part of the analysis. In particular, it seems strange that Majestic would get better relative to its index size as it grows, unless Google holds on to old data (which might be an important discovery in and of itself). Most likely, we simply can’t draw conclusions at that level of detail yet.

So what do we do?

Let’s say you have a list of domains or URLs for which you would like to know their relative values. Your process might look something like
this…

  • Check Open Site Explorer to see if all URLs are in their index. If so, you are looking at metrics most likely to be proportional to Google’s link graph.
  • If any of the links do not occur in the index, move to Ahrefs and use their Ahrefs ranking if all you need is a single PageRank-like metric.
  • If any of the links are missing from Ahrefs’s index, or you need something related to trust, move on to Majestic Fresh.
  • Finally, use Majestic Historic for (by leaps and bounds) the largest coverage available.
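That fallback order can be expressed as a short function. The check_* callables below are hypothetical placeholders standing in for "is this URL present in that index?"; no vendor API is being called here, and the point is only the order of preference:

```python
def pick_index(urls, check_moz, check_ahrefs, check_majestic_fresh):
    """Return the first index (by preference) that covers every URL in the list."""
    if all(check_moz(u) for u in urls):
        return "Moz (Open Site Explorer)"   # most proportional to Google
    if all(check_ahrefs(u) for u in urls):
        return "Ahrefs"                     # single PageRank-like metric
    if all(check_majestic_fresh(u) for u in urls):
        return "Majestic Fresh"             # trust-related metrics
    return "Majestic Historic"              # largest coverage overall

# Example usage with stub checkers that pretend Moz is missing one URL:
urls = ["https://example.com/a", "https://example.com/b"]
print(pick_index(urls,
                 check_moz=lambda u: u.endswith("/a"),
                 check_ahrefs=lambda u: True,
                 check_majestic_fresh=lambda u: True))  # -> "Ahrefs"
```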

It is important to point out that the likelihood that all the URLs you want to check are in a single index increases as the accuracy of the metric decreases. Considering the size of Majestic’s data, you can’t ignore them, because you are less likely to get null-value answers from their data than from the others. If anything rings true, it is that once again it makes sense to get data from as many sources as possible. You won’t get the most proportional data without Moz, the broadest data without Majestic, or everything in between without Ahrefs.

What about SEMrush? They are making progress, but they don’t publish any relative statistics that would be useful in this particular case. We hope to see more from them soon, given their already promising index!

Recommendations for the link graphing industry

All we hear about these days is big data; we almost never hear about good data. I know that the teams at Moz, Majestic, Ahrefs, SEMrush, and others are interested in mimicking Google, but I would love to see some organization stand up against the allure of more data in favor of better data: data more like Google’s. It could begin with testing various crawl strategies to see if they produce results more similar to the data shared in Google Search Console. Having the most Google-like data is certainly a crown worth winning.

Credits

Thanks to Diana Carter at Angular for assistance with data acquisition and Andrew Cron with statistical analysis. Thanks also to the representatives from Moz, Majestic, Ahrefs, and SEMrush for answering questions about their indices.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


SearchCap: Bing Talks Search Share, Matt Cutts On Future With Google & Major Drop In Traffic After Loss Of Snippets

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. From Search Engine Land: Bing: It’s Unlikely That We’ll Take Search Share Away From Google Microsoft’s Director of Search admitted this week that Bing isn’t likely to put a significant…



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


We Removed a Major Website from Google Search, for Science!

Posted by Cyrus-Shepard

The folks at Groupon surprised us earlier this summer when they reported the
results of an experiment that showed that up to 60% of direct traffic is organic.

In order to accomplish this, Groupon de-indexed their site, effectively removing themselves from Google search results. That’s crazy talk!

Of course, we knew we had to try this ourselves.

We rolled up our sleeves and chose to de-index
Followerwonk, both for its consistent Google traffic and its good analytics setup—that way we could properly measure everything. We were also confident we could quickly bring the site back into Google’s results, which minimized the business risks.

(We discussed de-indexing our main site moz.com, but… no soup for you!)

We wanted to measure and test several things:

  1. How quickly will Google remove a site from its index?
  2. How much of our organic traffic is actually attributed as direct traffic?
  3. How quickly can you bring a site back into search results using the URL removal tool?

Here’s what happened.

How to completely remove a site from Google

The fastest, simplest, and most direct method to completely remove an entire site from Google search results is to use the URL removal tool.

We also understood, via statements by Google engineers, that using this method gave us the biggest chance of bringing the site back, with little risk. Other methods of de-indexing, such as using meta robots NOINDEX, might have taken weeks and caused recovery to take months.

CAUTION: Removing any URLs from a search index is potentially very dangerous, and should be taken very seriously. Do not try this at home; you will not pass go, and will not collect $200!

After submitting the request, Followerwonk URLs started disappearing from Google search results within 2-3 hours.

The information needs to propagate across different data centers around the globe, so the effect can be delayed in some areas. In fact, for the entire duration of the test, organic Google traffic continued to trickle in and never dropped to zero.

The effect on direct vs. organic traffic

In the Groupon experiment, they found that when they lost organic traffic, they
actually lost a bunch of direct traffic as well. The Groupon conclusion was that a large amount of their direct traffic was actually organic—up to 60% on “long URLs”.

At first glance, the overall amount of direct traffic to Followerwonk didn’t change significantly, even when organic traffic dropped.

In fact, we could find no discrepancy in direct traffic outside the expected range.

I ran this by our contacts at Groupon, who said this wasn’t totally unexpected. You see, in their experiment they saw the biggest drop in direct traffic on long URLs, defined as a URL that is at least long enough to be in a subfolder, like https://followerwonk.com/bio/?q=content+marketer.
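A small helper capturing that "long URL" idea might look like the sketch below; the definition used here (any URL whose path goes beyond the bare homepage) is an assumption, since Groupon's exact cutoff isn't spelled out:

```python
from urllib.parse import urlparse

def is_long_url(url):
    """Rough proxy for a 'long URL': the path reaches into at least one subfolder."""
    path_segments = [s for s in urlparse(url).path.split("/") if s]
    return len(path_segments) >= 1  # i.e., not just the bare homepage

print(is_long_url("https://followerwonk.com/"))                         # False
print(is_long_url("https://followerwonk.com/bio/?q=content+marketer"))  # True
```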

For Followerwonk, the vast majority of traffic goes to the homepage and a handful of other URLs. This means we didn’t have a statistically significant sample size of long URLs to judge the effect. For the long URLs we were able to measure, the results were nebulous. 

Conclusion: While we can’t confirm the Groupon results with our outcome, we can’t discount them either.

It’s quite likely that a portion of your organic traffic is attributed as direct. This is because different browsers, operating systems, and user privacy settings can block referral information from reaching your website.

Bringing your site back from death

After waiting 2 hours,
we deleted the request. Within a few hours all traffic returned to normal. Whew!

Does Google need to recrawl the pages?

If the time period is short enough, and you used the URL removal tool, apparently not.

In the case of Followerwonk, Google removed over
300,000 URLs from its search results, and made them all reappear in mere hours. This suggests that the domain wasn’t completely removed from Google’s index, but only “masked” from appearing for a short period of time.

What about longer periods of de-indexation?

In both the Groupon and Followerwonk experiments, the sites were only de-indexed for a short period of time, and bounced back quickly.

We wanted to find out what would happen if you de-indexed a site for a longer period, like
two and a half days?

I couldn’t convince the team to remove any of our sites from Google search results for a few days, so I chose a smaller personal site that I often subject to merciless SEO experiments.

In this case, I de-indexed the site and didn’t remove the request until three days later. Even with this longer period, all URLs returned within just
a few hours of cancelling the URL removal request.

In the chart below, we revoked the URL removal request on Friday the 25th. The next two days were Saturday and Sunday, both lower traffic days.

Test #2: De-index a personal site for 3 days

Likely, the URLs were still in Google’s index, so we didn’t have to wait for them to be recrawled. 

Here’s another shot of organic traffic before and after the second experiment.

For longer removal periods, a few weeks for example, I speculate Google might drop these semi-permanently from the index, and re-inclusion would take much longer.

What we learned

  1. While a portion of your organic traffic may be attributed as direct (due to browsers, privacy settings, etc.), in our case the effect on direct traffic was negligible.
  2. If you accidentally de-index your site using Google Webmaster Tools, in most cases you can quickly bring it back to life by deleting the request.
  3. Re-inclusion happened quickly, even after we removed a site for over two days. Longer than this, the result is unknown, and you could have problems getting all the pages of your site indexed again.

Further reading

Moz community member Adina Toma wrote an excellent YouMoz post on the re-inclusion process using the same technique, with some great tips for other, more extreme situations.

Big thanks to
Peter Bray for volunteering Followerwonk for testing. You are a brave man!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Major upgrade to our free Scout app


Scout helps you get a better understanding of your markets and customers by revealing the language of your markets. The app is designed to look at any website, social media channel like Facebook, or customer forum where people are busy discussing your products, and pull out the most relevant keywords. Armed with this keyword knowledge, you can be much more confident in optimizing the content of your web pages and marketing campaigns.

How the new features work

Head over to any web page and open up Scout. First off, you’ll see a word cloud from the Summary tab. The word cloud reveals the language of the web page you are on. Drag the slider left to see the most relevant keywords, or to the right to see a much broader picture of the page’s language.

Click on the Insights tab

Check out the Insights tab to get deeper insight into how relevant the keywords on the page are. The slider is there again, and in addition you can filter keyword terms by number of words – so you could look only at keyword terms containing 3, 4, or 5 words, which would include “hunger games catching fire.”
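Outside the app, the same word-count filter is trivial to reproduce on any list of extracted phrases; the phrases below are just sample data:

```python
# Keep only keyword terms that are 3 to 5 words long.
phrases = [
    "hunger games",
    "hunger games catching fire",
    "hunger games catching fire trailer release date",
    "catching fire",
]

filtered = [p for p in phrases if 3 <= len(p.split()) <= 5]
print(filtered)  # ['hunger games catching fire']
```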

The Keywords tab

The Keywords tab is where we’ve added territory segmentation, including Mexico, Brazil, Spain, and Australia. It gives you search volumes and competition data by territory, drawn from our database of 3.5 billion searches. Again, there are interactive controls for you to filter and focus your results.

Scout integrates seamlessly into the Wordtracker tool

Log into your Wordtracker account (or sign up if you’re new). Now you can go ahead and save all the keywords that your Scout research revealed into your existing projects – or, if you need to, create new ones. Supercharge the results of your Scout research from within your regular Wordtracker account and stay organized.

The Scout updates are in Chrome right now, so please head over, download it (it’s free!), and let us know what you think.

Want to win an awesome prize? In the next few weeks we’ve got a competition planned around Scout, with a fun prize, so watch this space.

Wordtracker Blog



Major Entertainment Groups Accuse Google, Bing Of Directing Users To Illegal Content

Several major UK entertainment industry groups are accusing Google and Bing of directing searchers to illegal content, and have proposed a “Code of Practice” for how search engines can better encourage consumers to locate legal content on the web. The groups are also calling for the UK…



Please visit Search Engine Land for the full article.




Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


