
Can SEOs Stop Worrying About Keywords and Just Focus on Topics? – Whiteboard Friday

Posted by randfish

Should you ditch keyword targeting entirely? There’s been a lot of discussion around the idea of focusing on broad topics and concepts to satisfy searcher intent, but it’s a big step to take and could potentially hurt your rankings. In today’s Whiteboard Friday, Rand discusses old-school keyword targeting and new-school concept targeting, outlining a plan of action you can follow to get the best of both worlds.

Can We Abandon Keyword Research & On-Page Targeting in Favor of a Broader Topic/Concept Focus in Our SEO Efforts?

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to talk about a topic that I’ve been seeing coming up in the SEO world for probably a good 6 to 12 months now. I think ever since Hummingbird came out, there has been a little bit of discussion. Then, over the last year, it’s really picked up around this idea that, “Hey, maybe we shouldn’t be researching and targeting keywords or keyword phrases anymore. Maybe we should be going more towards topics, ideas, and broad concepts.”

I think there’s some merit to the idea, and then there are folks who are taking it way too far, moving away from keywords entirely and costing themselves a great deal of search opportunity and search engine traffic. So I’m going to try to describe these two approaches today, kind of the old-school world and this very new-school world of concept- and topic-based targeting, and then describe maybe a third way to combine them and improve on both models.

Classic keyword research & on-page targeting

In our classic keyword research, on-page targeting model, we sort of have our SEO going, “Yeah. Which one of these should I target?”

She’s thinking about the best times to fly. She’s writing a travel website, “Best Times to Fly,” and there’s a bunch of keywords. She’s checking the volume and maybe some other metrics around “best flight times,” “best days to fly,” “cheapest days to fly,” “least crowded flights,” “optimal flight dates,” “busiest days to fly.” Okay, a bunch of different keywords.

So, maybe our SEO friend here is thinking, “All right. She’s going to maybe go make a page for each of these keywords.” Maybe not all of them at first. But she’s going to decide, “Hey, you know what? I’m going after ‘optimal flight dates,’ ‘lowest airport traffic days,’ and ‘cheapest days to fly.’ I’m going to make three different pages. Yeah, the content is really similar. It’s serving a very similar purpose. But that doesn’t matter. I want to have the best possible keyword targeting that I can for each of these individual ones.”

“So maybe I can’t invest as much effort in the content and the research into it, because I have to make these three different pages. But you know what? I’ll knock out these three. I’ll do the rest of them, and then I’ll iterate and add some more keywords.”

That’s pretty old-school SEO, very, very classic model.

New school topic- & concept-based targeting

Newer school, a little bit of this concept and topic targeting, we get into this world where folks go, “You know what? I’m going to think bigger than keywords.”

“I’m going to kind of ignore keywords. I don’t need to worry about them. I don’t need to think about them. Whatever the volumes are, they are. If I do a good job of targeting searchers’ intent and concepts, Google will do a good job of recognizing my content and figuring out the keywords that it maps to. I don’t have to stress about that. So instead, I’m going to think about how I want to help people who need to choose the right days to buy flights.”

“So I’m thinking about days of the week, and maybe I’ll do some brainstorming and a bunch of user research. Maybe I’ll use some topic association tools to try and broaden my perspective on what those intents could be. So days of the week, the right months, the airline differences, maybe airport by airport differences, best weeks. Maybe I want to think about it by different country, price versus flexibility, when can people use miles, free miles to fly versus when can’t they.”

“All right. Now, I’ve come up with this, the ultimate guide to smart flight planning. I’ve got great content on there. I have this graph where you can actually select a different country or different airline and see the dates or the weeks of the year, or the days of the week when you can get cheapest flights. This is just an awesome, awesome piece of content, and it serves a lot of these needs really nicely.” It’s not going to rank for crap.

I don’t mean to be rude. It’s not the case that Google can never map this to these types of keywords. But if a lot of people are searching for “best days of the week to fly” and you have “The Ultimate Guide to Smart Flight Planning,” you might do a phenomenal job of helping people with that search intent. Google is not going to do a great job of ranking you for that phrase, and it’s not Google’s fault entirely. A lot of this has to do with how the Web talks about content.

A great piece of content like this comes out. Maybe lots of blogs pick it up. News sites pick it up. People write about it and link to it. How are they describing it? Well, they’re describing it as a guide to smart flight planning. So those are the terms and phrases people associate with it, which are not the same terms and phrases that someone would associate with an equally good guide that leveraged the keywords intelligently.

A smarter hybrid

So my recommendation is to combine these two things. With a smart combination of these techniques, we can get great results on both sides of the aisle: great concept and topic modeling that can serve a bunch of different searcher needs and target many different keywords in a given searcher intent model, done in a way that targets keywords intelligently in our titles, our headlines, our sub-headlines, and the content on the page, so that we can actually capture the search volume and rank for the keywords that send us traffic on an ongoing basis.

So I take my keyword research ideas and my tool results from all the exercises I did over here. I take my topic and concept brainstorm, maybe some of my topic tool results, my user research results. I take these and put them together in a list of concepts and needs that our content is going to answer grouped by combinable keyword targets — I’ll show you what I mean — with the right metrics.

So I might say my keyword groups are there’s one intent around “best days of the week,” and then there’s another intent around “best times of the year.” Yes, there’s overlap between them. There might be people who are looking for kind of both at the same time. But they actually are pretty separate in their intent. “Best days of the week,” that’s really someone who knows that they’re going to fly at some point and they want to know, “Should I be booking on a Tuesday, Wednesday, Thursday, or a Monday, or a Sunday?”

Then, there’s “best times of the year,” someone who’s a little more flexible with their travel planning, and they’re trying to think maybe a year ahead, “Should I buy in the spring, the fall, the summer? What’s the time to go here?”

So you know what? We’re going to take all the keyword phrases that we discovered over here. We’re going to group them by these concept intents. Like “best days of the week” could include the keywords “best days of the week to fly,” “optimal day of week to fly,” “weekday versus weekend best for flights,” “cheapest day of the week to fly.”

“Best times of the year,” that keyword group could include words and phrases like “best weeks of the year to fly,” “cheapest travel weeks,” “lowest cost months to fly,” “off-season flight dates,” “optimal dates to book flights.”

These aren’t just keyword matches. They’re concept and topic matches, but taken to the keyword level so that we actually know things like the volume, the difficulty, the click-through rate opportunity for these, the importance that they may have or the conversion rate that we think they’re going to have.

Then, we can group these together and decide, “Hey, you know what? The volume for all of these is higher. But these ones are more important to us. They have lower difficulty. Maybe they have higher click-through rate opportunity. So we’re going to target ‘best times of the year.’ That’s going to be the content we create. Now, I’m going to wrap my keywords together into ‘the best weeks and months to book flights in 2016.’”
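For those who work through this grouping exercise in a script rather than a spreadsheet, the decision step can be sketched in a few lines of Python. The group names and keywords come from the intent groups above, but the volume and difficulty numbers are made-up placeholders, and the scoring formula is just one naive way to trade volume off against difficulty:

```python
# Hypothetical sketch: keywords from the two intent groups above, with
# placeholder volume and difficulty metrics from a keyword tool export.
keyword_groups = {
    "best days of the week": [
        {"kw": "best days of the week to fly", "volume": 1900, "difficulty": 60},
        {"kw": "cheapest day of the week to fly", "volume": 2400, "difficulty": 64},
        {"kw": "weekday versus weekend best for flights", "volume": 320, "difficulty": 62},
    ],
    "best times of the year": [
        {"kw": "best weeks of the year to fly", "volume": 880, "difficulty": 25},
        {"kw": "cheapest travel weeks", "volume": 1300, "difficulty": 27},
        {"kw": "off-season flight dates", "volume": 590, "difficulty": 22},
    ],
}

def group_score(kws):
    """Total volume, discounted by the group's average difficulty."""
    volume = sum(k["volume"] for k in kws)
    avg_difficulty = sum(k["difficulty"] for k in kws) / len(kws)
    return volume * (100 - avg_difficulty) / 100

# With these placeholder numbers, the higher-volume "days of the week" group
# loses to the much easier "times of the year" group.
target = max(keyword_groups, key=lambda g: group_score(keyword_groups[g]))
print(target)  # -> best times of the year
```

In practice you’d fold in click-through opportunity and conversion estimates too, as described above; the point is that the decision happens at the keyword-group level, not keyword by keyword.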

That’s just as compelling a title as “The Ultimate Guide to Smart Flight Planning,” but maybe a tiny bit less. You could quibble. But I’m sure you could come up with one, and it uses our keywords intelligently. Now I’ve got sub-headings that are “sort by the cheapest,” “the least crowded,” “the most flexible,” “by airline,” “by location.” Great. I’ve hit all my topic areas and all my keyword areas at the same time, all in one piece of content.

This kind of model, where we combine the best of these two worlds, I think is the way of the future. I don’t think it pays to stick to your old-school keyword targeting methodology, nor do I think it pays to ignore keyword targeting and keyword research entirely. I think we’ve got to merge these practices and come up with something smart.

All right everyone. I look forward to your comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Moz Blog


Stop Worrying About the New Google Maps; These URL Parameters Are Gold

Posted by David-Mihm

I suspect I’m not alone in saying: I’ve never been a fan of the New Google Maps.

In the interstitial weeks between that tweet and today, Google has made some noticeable improvements. But the user experience still lags in many ways relative to the classic version (chief among them: speed).

Google’s invested so heavily in this product, though, that there’s no turning back at this point. We as marketers need to come to terms with a product that will drive an increasing number of search results in the future.

Somewhat inspired by this excellent Pete Wailes post from many years ago, I set out last week to explore Google Maps with a fresh set of eyes and an open mind to see what I could discover about how it renders local business results. Below is what I discovered.

Basic URL structure

New Google Maps uses a novel URL structure (novel for me, anyway) that is not based on the traditional ? and & parameters of Classic Google Maps, but instead uses slashes and something called hashbangs (!-delimited segments) to tell the browser what to render.

The easiest way to describe the structure is to illustrate it:
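As a rough sketch in Python, here’s one way to pull apart a URL of the /search/&lt;query&gt;/@&lt;lat&gt;,&lt;lng&gt;,&lt;zoom&gt;z shape used in the examples throughout this post; the sample URL is a simplified version of the grocery stores example further down:

```python
import re

# A rough sketch of the New Google Maps search-URL anatomy, assuming the
# /search/<query>/@<lat>,<lng>,<zoom>z shape used in this post's examples.
url = "https://www.google.com/maps/search/grocery+stores/@45.5424364,-122.654422,11z/am=t/"

pattern = re.compile(
    r"/maps/search/(?P<query>[^/]+)"                         # +-separated query
    r"/@(?P<lat>-?[\d.]+),(?P<lng>-?[\d.]+),(?P<zoom>\d+)z"  # centroid + zoom
)
m = pattern.search(url)
print(m.groupdict())
```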

There are also some additional useful hashbang parameters relating to local queries that I’ll describe in further detail below.

Some actual feature improvements

Despite the performance issues, New Google Maps has introduced at least two useful URL modifiers I’ve grown to love.


The am=t parameter

This generates a stack-ranked list of businesses in a given area that Google deems relevant for the keyword you’re searching. It’s basically the equivalent of the list in the lefthand panel of Classic Google Maps, but much easier to get to via direct URL. Important: am=t must always be placed after /search and before the hashbang modifiers, or else the results will break.


The by:experts modifier

This feature shows you businesses that have been reviewed by Google+ experts (the equivalent of what we’ve long called “power reviewers” or “authority reviewers” on my annual Local Search Ranking Factors survey). To my knowledge, it’s the first time Google has publicly revealed who these power users are, opening up the possibility of an interesting future study correlating PlaceRank with the presence, valence, and volume of these reviews. To see these power reviewers, it seems you have to be signed in to a Google+ account, but perhaps others have found a way around this requirement.

Combining these two parameters yields incredibly useful results like these, which could form the basis for an influencer-targeting campaign:

Above: a screenshot of the results for: https://www.google.com/maps/search/grocery+stores+by:experts/@45.5424364,-122.654422,11z/am=t/
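As a hedged sketch, assembling these URLs is easy to script. The helper below reproduces the grocery stores URL above; the function name and its keyword flags are my own invention, not anything Google documents:

```python
# Hypothetical helper for assembling New Google Maps search URLs from their
# parts; the by:experts and am=t modifiers are the ones described above.
def maps_search_url(query, lat, lng, zoom, experts=False, list_view=False):
    q = query.replace(" ", "+")
    if experts:
        q += "+by:experts"  # restrict to businesses reviewed by Google+ experts
    url = f"https://www.google.com/maps/search/{q}/@{lat},{lng},{zoom}z/"
    if list_view:
        url += "am=t/"      # am=t goes after /search and before any hashbangs
    return url

# Reproduces the grocery stores URL from the screenshot above:
print(maps_search_url("grocery stores", 45.5424364, -122.654422, 11,
                      experts=True, list_view=True))
# -> https://www.google.com/maps/search/grocery+stores+by:experts/@45.5424364,-122.654422,11z/am=t/
```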

Local pack results and the vacuum left by tbm=plcs

Earlier this week, Steve Morgan noticed that Google crippled the ability to render place-based results from a Google search (ex: google.com/search?q=realtors&tbm=plcs). Many local rank-trackers were based on the results of these queries.

Finding a replacement for this parameter in New Google Maps turns out to be a little more difficult than it would first appear. You’ll note in the summary of URL structure above that each URL comes with a custom-baked centroid. But local pack results on a traditional Google SERP each have their own predefined viewport — i.e. the width, height, and zoom level that most closely captures the location of each listing in the pack, making it difficult to determine the appropriate zoom level.

Above: the primary SERP viewport for ‘realtors’ with location set to Seattle, WA.

Note that if you click the “Map for realtors” link today, and then add the /am=t parameter to the resulting URL, you tend to get a different order of results than what appears in the pack.

I’m not entirely sure why the order changes. One theory is that Google is now back to blending pack results (using both organic and maps algorithms). Another theory is that the aspect ratio of the viewport in the /am=t window is invariably square, which yields a different set of relevant results than the “widescreen” viewport on the primary SERP.

One thing I have found helps with replicability is to leave the @lat,lng,zoom parameters out of the URL, and let Google automatically generate them for you.

Here are a couple of variations that I encourage you to try:

followed by:

Take a closer look at those trailing parameters and you’ll see a structure that looks like this:

The long string starting with 0x and ending with 9a is the Feature ID of the centroid of the area in which you’re searching (in this case, Seattle). Incidentally, this feature ID is also rendered by Google Mapmaker using a URL similar to http://www.google.com/mapmaker?gw=39&fid={your_fid}.

This is the easy part. You can find this string by typing the URL:


waiting for the browser to refresh, and then copying it from the end of the resulting URL.
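A hedged sketch of that copy step in Python: the regex just grabs the 0x…:0x… hex pair off the end of the URL, and the sample URL below (including the feature ID value inside it) is illustrative only, so grab the real one from your own address bar:

```python
import re

# Illustrative only: this URL and the feature ID inside it are placeholders
# standing in for whatever Google generates after the redirect.
resulting_url = (
    "https://www.google.com/maps/place/Seattle,+WA/@47.6062,-122.3321,11z/"
    "data=!3m1!4b1!4m2!3m1!1s0x5490102c93e83355:0x102565466944d59a"
)

# Feature IDs take the form of two hex numbers joined by a colon.
fid_match = re.search(r"0x[0-9a-f]+:0x[0-9a-f]+", resulting_url)
fid = fid_match.group(0)

# The same ID plugs into the Mapmaker URL mentioned above.
mapmaker_url = f"http://www.google.com/mapmaker?gw=39&fid={fid}"
print(fid)
```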

The hard part is figuring out which hashbang combo will generate which order of results, and I still haven’t been able to do it. I’m hoping that by publishing this half-complete research, some enterprising Moz reader might be able to complete the puzzle! And there’s also the strong possibility that this theory is completely off base.

In my research thus far, the shorter hashbang combination (!3m1!4b1) seems to yield the closest results to what tbm=plcs used to render, but they aren’t 100% identical.

The longer hashbang combination (!3m1!4b1!4m5!2m4!3m3) actually seems to predictably return the same set of results as a Local search on Google Plus — and note the appearance of the pushpin icon next to the keyword when you add this longer combination:
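For experimenting with the two combinations, here’s a small sketch. I’m assuming the hashbangs attach to the search URL via a data= segment, as hashbang data does elsewhere in New Google Maps URLs, and the base URL is a placeholder:

```python
# Sketch for trying the two hashbang combinations discussed above; the base
# URL is a placeholder, and the data= attachment point is an assumption.
BASE = "https://www.google.com/maps/search/realtors/"

SHORT_COMBO = "!3m1!4b1"             # closest (not identical) to old tbm=plcs
LONG_COMBO = "!3m1!4b1!4m5!2m4!3m3"  # tracks Local search on Google Plus

short_url = BASE + "data=" + SHORT_COMBO
long_url = BASE + "data=" + LONG_COMBO
print(short_url)
print(long_url)
```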

Who’s #1?

Many of us in the SEO community, even before the advent of (not provided), encouraged marketers and business owners to stop obsessing about individual rankings and start looking at visibility in a broader sense. Desperately scrambling for a #1 ranking on a particular keyword has long been a foolish waste of resources.

Google’s desktop innovations in local search add additional ammunition to this argument. Heat map studies have shown that the first carousel result is far from dominant, and that a compelling Google+ profile photo can perform incredibly well even as far down as the “sixth or seventh” (left to right) spot. Ranking #1 in the carousel doesn’t provide quite the same visual benefit as ranking #1 in an organic SERP or 7-pack.

The elimination of the lefthand list pane in New Google Maps makes an even stronger case. It’s literally impossible to put these businesses in rank order visually, no matter how hard you stare at the map:

Mobile, mobile, mobile

Paradoxically, though, just as Google is moving away from ranked results on the desktop, my view is that higher rankings matter more than ever in mobile search. And as mobile and wearables continue to gain market share relative to desktop, that trend is likely to increase.

The increasing ubiquity of Knowledge Panels in search results the past couple of years has been far from subtle. Google is now not only attempting to organize the world’s information, but condense each piece of it into a display that will fit on a Google Glass (or Google Watch, or certainly a Google Android phone).

Nowhere is the need to be #1 more dramatic than in the Google Maps app, in which users perform an untold number of searches each month. List view is completely hidden (I didn’t even know it existed until this week) and an average user is just as likely to think the first result is the only one for them as they are to figure out they need to swipe right to view more businesses.

Above: a Google Maps app result for ‘golf courses’, in which the first result has a big-time advantage.

The other issue that mobile results really bring to the fore is that the user is becoming the centroid.

This is true even when searching from the desktop. I performed some searches one morning from a neighborhood coffee shop with wifi, and a few minutes later from my house six blocks away. To my surprise, I got completely different results. From my house, Google is apparently only able to detect that I’m somewhere in “Portland.” But from the coffee shop, it was able to detect my location at a much more granular level (presumably due to the coffee shop’s wifi?), and showed me results specific to my ZIP code, with the centroid placed at the center of that ZIP. The zoom setting for both adjusted automatically: the more granular ZIP code targeting defaulted to a zoom level of 15z or 16z, versus 11z to 13z from my home, where Google wasn’t as sure of my location.

Note, too, that I was unable to be exact about the zoom level in the previous paragraph. That’s because the centroid is category-dependent. It likely always has been, but that fact is much more noticeable in New Google Maps.

Maps app visibility

Taking both of these factors into account, when it comes to replicating Google Maps app visibility, here is a case where specifying @lat,lng,zoom (with the zoom set to 17z) can be incredibly useful.

As an example, I performed the search below from my iPhone at the hotel I was staying at in Little Italy after a recent SEM SD event, and was able to replicate the results with this URL string on desktop:


Conclusions and recommendations

While I still feel the user experience of New Google Maps is subpar, as a marketer I found myself developing a very Strangelovian mindset over the past week or so: I have actually learned to stop worrying and love the new Google Maps. There are some incredibly useful new URL parameters that allow for a far more complete picture of local search visibility than Classic Google Maps provided.

With this column, I wanted to present at least a first stab for the Moz community to build on and experiment with. But this is clearly an area ripe for more research, particularly with an eye toward finding a complete replacement for the old tbm=plcs parameter.

As mobile usage continues to skyrocket, identifying the opportunities in your (or your client’s) competitive set using the new Google Maps will only become more important.


Moz Blog
