
Want to Speak at MozCon 2018? Here’s Your Chance – Pitch to Be a Community Speaker!

Posted by Danielle_Launders

MozCon 2018 is nearing and it’s almost time to brush off that microphone. If speaking at MozCon is your dream, then we have the opportunity of a lifetime for you! Pitch us your topic and you may be selected to join us as one of our six community speakers.

What is a community speaker, you ask? MozCon sessions are by invite only, meaning we reach out to select speakers for the majority of our talks. But every year we reserve six 15-minute community speaking slots, where we invite anyone in the SEO community to pitch to present at MozCon. These sessions are both an attendee favorite and a fabulous opportunity to break into the speaking circuit.

Katie Cunningham, one of last year’s community speakers, on stage at MozCon 2017

Interested in pitching your own idea? Read on for everything you need to know:

The details

  • Fill out the community speaker submission form
  • Only one submission per person — make sure to choose the one you’re most passionate about!
  • Pitches must be related to online marketing and for a topic that can be covered in 15 minutes
  • Submissions close on Sunday, April 22nd at 5pm PDT
  • All decisions are final
  • All speakers must adhere to the MozCon Code of Conduct
  • You’ll be required to present in Seattle at MozCon

Ready to pitch your idea?

If you submit a pitch, you’ll hear back from us regardless of your acceptance status.

What you’ll get as a community speaker:

  • 15 minutes on the MozCon stage for a keynote-style presentation, followed by 5 minutes of Q&A
  • A free ticket to MozCon (we can issue a refund or transfer if you have already purchased yours)
  • Four nights of lodging covered by Moz at our partner hotel
  • Reimbursement for your travel — up to $500 for domestic and $750 for international travel
  • An additional free MozCon ticket for you to give away, plus a code for $300 off of one ticket
  • An invitation for you and your significant other to join us for the pre-event speakers dinner

The selection process:

We have an internal committee of Mozzers that reviews every pitch. In the first phase we review only the topics to ensure that they’re a good fit for our audience. After this first phase, we look at the entirety of the pitch to help us get a comprehensive idea of what to expect from your talk on the MozCon stage.

Want some advice for perfecting your pitch?

  • Keep your pitch focused on online marketing. The more actionable the pitch, the better.
  • Be detailed! We want to know the actual tactics our audience will be learning about. Remember, we receive a ton of pitches, so the more you can explain, the better!
  • Review the topics already being presented — we’re looking for something new to add to the stage.
  • Keep the pitch to under 1,200 characters. We’re strict with the character limit — even the best pitches will be disqualified if they don’t abide by the rules.
  • No pitches will be evaluated in advance, so please don’t ask :)
  • Using social media to lobby your pitch won’t help. Instead, put your time and energy into the actual pitch itself!
  • Linking to a previous example of a slide deck or presentation isn’t required, but it does help the committee a ton.

You’ve got this!

This could be you.

If your pitch is selected, the MozCon team will help you along the way. Whether this is your first time on stage or your twentieth, we want this to be your best talk to date. We’re here to answer questions that may come up and to work with you to deliver something you’re truly proud of. Here are just a handful of ways that we’re here to help:

  • Topic refinement
  • Helping with your session title and description
  • Reviewing any session outlines and drafts
  • Providing plenty of tips around best practices — specifically with the MozCon stage in mind
  • Comprehensive show guide
  • Being available to listen to you practice your talk
  • Reviewing your final deck
  • A full stage tour on Sunday to meet our A/V crew, see your presentation on the big screens, and get a feel for the show
  • An amazing 15-person A/V team

Make your pitch to speak at MozCon!

We can’t wait to see what y’all come up with. Best of luck!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP protocols are the basic set of standards allowing the World Wide Web to exchange information. They are the reason a web browser can display data stored on another server. The first version was initiated back in 1989, which means that, just like everything else, HTTP protocols are getting outdated. HTTP/2 is the latest major version of the HTTP protocol, created to replace its aging predecessors.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter or can even be done without a basic understanding of HTTP protocol. Robert knew that if his site wasn’t indexing correctly, his client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

  • Multiplexing: The ability to send multiple requests through a single TCP connection.
  • Server push: When a client requires some resource (let’s say, an HTML document), a server can push CSS and JS files to a client cache. It reduces network latency and round-trips.
  • One connection per origin: With HTTP/2, only one connection is needed to load the website.
  • Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
  • Binary framing layer: HTTP/2 is easier to parse (for both the server and user).
  • Header compression: This feature reduces overhead from plain text in HTTP/1.1 and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.
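If you want to check for yourself which protocol a given server actually serves to a client that can speak HTTP/2, here is a minimal sketch. It assumes the third-party httpx package installed with its HTTP/2 extra (pip install "httpx[http2]"), and the URL is just a placeholder for the site you want to test:

import httpx

# Offer HTTP/2; httpx silently falls back to HTTP/1.1 if the server can't negotiate it
with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com/")
    print(response.http_version)  # "HTTP/2" if negotiated, otherwise "HTTP/1.1"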

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangout in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which was at the time supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I was following this online debate, I still received questions from our clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website (https://goralewicz.com) and make it HTTP/2 only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating an HTTP/2 connection:

1. Over HTTP (unsecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method to which John Mueller was referring. However, it doesn’t apply to my website (because it’s served via HTTPS). What is more, this is an old-fashioned way of negotiating, not supported by modern browsers. Below is a screenshot from Caniuse.com:

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
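If you want to watch that ALPN negotiation happen yourself, here is a small sketch using nothing but Python’s standard library; the hostname is a placeholder, so swap in the site you want to test:

import socket
import ssl

host = "example.com"  # placeholder: the site you want to test

# Offer both h2 and http/1.1 via ALPN and see which one the server picks
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.selected_alpn_protocol())  # "h2" means the server agreed to HTTP/2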

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio, and asked him to see if there was anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he provide us with help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot does support the SPDY protocol and Next Protocol Negotiation (NPN). And thus, it can’t support HTTP/2.

Below is Peter’s response:


I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

    1. This protocol is vulnerable
    2. Google Chrome no longer supports SPDY in favor of HTTP/2
    3. Servers have been neglecting to support SPDY. Let’s examine the NGINX example: from version 1.9.5, they no longer support SPDY.
    4. Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

# -WWW emulates a simple web server, serving requested pages from the current directory
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg

# -www simply sends back a status page describing the connection and ciphers used
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console Fetch and Render or in the Google mobile tester.

As I wrote above, there is no logical reason why Googlebot supports SPDY. This protocol is vulnerable; no modern browser supports it. Additionally, servers (including NGINX) neglect to support it. It’s just a matter of time until Googlebot is able to crawl using HTTP/2. Just implement HTTP/1.1 + HTTP/2 support on your own server (your users will notice, thanks to faster loading) and wait until Google is able to send requests using HTTP/2.


Summary

In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. We recommended that Robert (and you readers as well) enable HTTP/2 on your websites for better performance, but continue optimizing for HTTP/1.1. Your visitors will notice and thank you.


So You Want to Build a Chat Bot – Here’s How (Complete with Code!)

Posted by R0bin_L0rd

You’re busy and (depending on effective keyword targeting) you’ve come here looking for something to shave months off the process of learning to produce your own chat bot. If you’re convinced you need this and just want the how-to, skip to “What my bot does.” If you want the background on why you should be building for platforms like Google Home, Alexa, and Facebook Messenger, read on.

Why should I read this?

Do you remember when it wasn’t necessary to have a website? When most boards would scoff at the value of running a Facebook page? Now Gartner is telling us that customers will manage 85% of their relationship with brands without interacting with a human by 2020 and publications like Forbes are saying that chat bots are the cause.

The situation now is the same as every time a new platform develops: if you don’t have something your customers can access, you’re giving that medium to your competition. At the moment, an automated presence on Google Home or Slack may not be central to your strategy, but those who claim ground now could dominate it in the future.

The problem is time. Sure, it’d be ideal to be everywhere all the time, to have your brand active on every platform. But it would also be ideal to catch at least four hours of sleep a night or stop covering our keyboards with three-day-old chili con carne as we eat a hasty lunch in between building two of the Next Big Things. This is where you’re fortunate in two ways:

  1. When we develop chat applications, we don’t have to worry about things like a beautiful user interface because it’s all speech or text. That’s not to say you don’t need to worry about user experience, as there are rules (and an art) to designing a good conversational back-and-forth. Amazon is actually offering some hefty prizes for outstanding examples.
  2. I’ve spent the last six months working through the steps from complete ignorance to creating a distributable chat bot and I’m giving you all my workings. In this post I break down each of the levels of complexity, from no-code back-and-forth to managing user credentials and sessions that stretch over days or months. I’m also including full code that you can adapt and pull apart as needed. I’ve commented each portion of the code explaining what it does and linking to resources where necessary.

I’ve written more about the value of Interactive Personal Assistants on the Distilled blog, so this post won’t spend any longer focusing on why you should develop chat bots. Instead, I’ll share everything I’ve learned.

What my built-from-scratch bot does

Ever since I started investigating chat bots, I was particularly interested in finding out the answer to one question: What does it take for someone with little-to-no programming experience to create one of these chat applications from scratch? Fortunately, I have direct access to someone with little-to-no experience (before February, I had no idea what Python was). And so I set about designing my own bot with the following hard conditions:


  1. It had to have some kind of real-world application. It didn’t have to be critical to a business, but it did have to bear basic user needs in mind.
  2. It had to be easily distributable across the immediate intended users, and to have reasonable scope to distribute further (modifications at most, rather than a complete rewrite).
  3. It had to be flexible enough that you, the reader, can take some free code and make your own chat bot.
  4. It had to be possible to adapt the skeleton of the process for much more complex business cases.
  5. It had to be free to run, but could have the option of paying to scale up or make life easier.
  6. It had to send messages confirming when important steps had been completed.

The resulting program is “Vietnambot,” a program that communicates with Slack, the API.AI linguistic processing platform, and Google Sheets, using real-time and asynchronous processing and its own database for storing user credentials.

If that meant nothing to you, don’t worry — I’ll define those things in a bit, and the code I’m providing is obsessively commented with explanation. The thing to remember is it does all of this to write down food orders for our favorite Vietnamese restaurant in a shared Google Sheet, probably saving tens of seconds of Distilled company time every year.

It’s deliberately mundane, but it’s designed to be a template for far more complex interactions. The idea is that whether you want to write a no-code-needed back-and-forth just through API.AI; a simple Python program that receives information, does a thing, and sends a response; or something that breaks out of the limitations of linguistic processing platforms to perform complex interactions in user sessions that can last days, this post should give you some of the puzzle pieces and point you to others.

What is API.AI and what’s it used for?

API.AI is a linguistic processing interface. It can receive text, or speech converted to text, and perform much of the comprehension for you. You can see my Distilled post for more details, but essentially, it takes the phrase “My name is Robin and I want noodles today” and splits it up into components like:

  • Intent: food_request
  • Action: process_food
  • Name: Robin
  • Food: noodles
  • Time: today

This setup means you have some hope of responding to the hundreds of thousands of ways your users could find to say the same thing. It’s your choice whether API.AI receives a message and responds to the user right away, or whether it receives a message from a user, categorizes it and sends it to your application, then waits for your application to respond before sending your application’s response back to the user who made the original request. In its simplest form, the platform has a bunch of one-click integrations and requires absolutely no code.

I’ve listed the possible levels of complexity below, but it’s worth bearing some hard limitations in mind which apply to most of these services. They cannot remember anything outside of a user session, which will automatically end after about 30 minutes; they have to do everything through what are called POST and GET requests (something you can ignore unless you’re using code); and if you do choose to have it ask your application for information before it responds to the user, you have to do everything and respond within five seconds.

What are the other things?

Slack: A text-based messaging platform designed for work (or for distracting people from work).

Google Sheets: We all know this, but just in case, it’s Excel online.

Asynchronous processing: Most of the time, one program can do one thing at a time. Even if it asks another program to do something, it normally just stops and waits for the response. Asynchronous processing is how we ask a question and continue without waiting for the answer, possibly retrieving that answer at a later time.

Database: Again, it’s likely you know this, but if not: it’s Excel that our code will use (different from the Google Sheet).

Heroku: A platform for running code online. (Important to note: I don’t work for Heroku and haven’t been paid by them. I couldn’t say that it’s the best platform, but it can be free and, as of now, it’s the one I’m most familiar with).
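To make the asynchronous processing idea above a bit more concrete, here is a ten-second illustration using only Python’s standard library. Vietnambot itself does this with Redis and Celery (see section 7), so treat this as a toy example; slow_lookup and the query string are made up:

import threading
import time

def slow_lookup(query):
    # Stand-in for something slow, like a big Google Sheets read
    time.sleep(5)
    print("Answer for", query, "is ready")

# Fire off the slow work and carry straight on rather than waiting for it
worker = threading.Thread(target=slow_lookup, args=("today's orders",))
worker.start()
print("Carried on without waiting for the answer")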

How easy is it?

This graph isn’t terribly scientific and it’s from the perspective of someone who’s learning much of this for the first time, so here’s an approximate breakdown:

  1. You set up the conversation purely through API.AI or similar, with no external code needed (for instance, answering set questions about contact details or opening times). Time it took me: half an hour to a distributable prototype.
  2. A program that receives information from API.AI and uses that information to update the correct cells in a Google Sheet (but can’t remember user names and can’t use the slower Google Sheets integrations). Time it took me: a few weeks to a distributable prototype.
  3. A program that remembers user names once they’ve been set and writes them to Google Sheets. It’s limited to five seconds of processing time by API.AI, so it can’t use the slower Google Sheets integrations and may not work reliably when the app has to boot up from sleep, because that takes a few seconds of your allocation.* Time it took me: a few weeks on top of the last prototype.
  4. A program that remembers user details and manages the connection between API.AI and our chosen platform (in this case, Slack) so it can break out of the five-second processing window. Time it took me: a few weeks more on top of the last prototype (not including the time needed to rewrite existing structures to work with this).

*On the Heroku free plan, when your app hasn’t been used for 30 minutes it goes to sleep. This means that the first time it’s activated it takes a little while to start your process, which can be a problem if you have a short window in which to act. You could get around this by (mis)using a free “uptime monitoring service” which sends a request every so often to keep your app awake. If you choose this method, in order to avoid using all of the Heroku free hours allocation by the end of the month, you’ll need to register your card (no charge, it just gets you extra hours) and only run this application on the account. Alternatively, there are any number of companies happy to take your money to keep your app alive.

For the rest of this post, I’m going to break down each of those key steps and either give an overview of how you could achieve it, or point you in the direction of where you can find that. The code I’m giving you is Python, but as long as you can receive and respond to GET and POST requests, you can do it in pretty much whatever format you wish.


1. Design your conversation

Conversational flow is an art form in itself. Jonathan Seal, strategy director at Mando and member of British Interactive Media Association’s AI thinktank, has given some great talks on the topic. Paul Pangaro has also spoken about conversation as more than interface in multiple mediums.

Your first step is to create a flow chart of the conversation. Write out your ideal conversation, then write out the most likely ways a person might go off track and how you’d deal with them. Then go online, find existing chat bots and do everything you can to break them. Write out the most difficult, obtuse, and nonsensical responses you can. Interact with them like you’re six glasses of wine in and trying to order a lemon engraving kit, interact with them as though you’ve found charges on your card for a lemon engraver you definitely didn’t buy and you are livid, interact with them like you’re a bored teenager. At every point, write down what you tried to do to break them and what the response was, then apply that to your flow. Then get someone else to try to break your flow. Give them no information whatsoever apart from the responses you’ve written down (not even what the bot is designed for), refuse to answer any input you don’t have written down, and see how it goes. David Low, principal evangelist for Amazon Alexa, often describes the value of printing out a script and testing the back-and-forth for a conversation. As well as helping to avoid gaps, it’ll also show you where you’re dumping a huge amount of information on the user.

While “best practices” are still developing for chat bots, a common theme is that it’s not a good idea to pretend your bot is a person. Be upfront that it’s a bot — users will find out anyway. Likewise, it’s incredibly frustrating to open a chat and have no idea what to say. On text platforms, start with a welcome message making it clear you’re a bot and giving examples of things you can do. On platforms like Google Home and Amazon Alexa users will expect a program, but the “things I can do” bit is still important enough that your bot won’t be approved without this opening phase.

I’ve included a sample conversational flow for Vietnambot at the end of this post as one way to approach it, although if you have ideas for alternative conversational structures I’d be interested in reading them in the comments.

A final piece of advice on conversations: The trick here is to find organic ways of controlling the possible inputs and preparing for unexpected inputs. That being said, the Alexa evangelist team provide an example of terrible user experience in which a bank’s app said: “If you want to continue, say nine.” Quite often questions, rather than instructions, are the key.

2. Create a conversation in API.AI

API.AI has quite a lot of documentation explaining how to create programs here, so I won’t go over individual steps.

Key things to understand:

You create agents; each is basically a different program. Agents recognize intents, which are simply ways of triggering a specific response. If someone says the right things at the right time, they meet criteria you have set, fall into an intent, and get a pre-set response.

The right things to say are included in the “User says” section (screenshot below). You set either exact phrases or lists of options as the necessary input. For instance, a user could write “Of course, I’m [any name]” or “Of course, I’m [any temperature].” You could set up one intent for name-is which matches “Of course, I’m [given-name]” and another intent for temperature which matches “Of course, I’m [temperature],” and depending on whether your user writes a name or temperature in that final block you could activate either the “name-is” or “temperature-is” intent.

The “right time” is defined by contexts. Contexts help define whether an intent will be activated, but are also created by certain intents. I’ve included a screenshot below of an example interaction. In this example, the user says that they would like to go on holiday. This activates a holiday intent and sets the holiday context you can see in input contexts below. After that, our service will have automatically responded with the question “Where would you like to go?” When our user says “The” and then any location, it activates our holiday location intent because it matches both the context and what the user says. If, on the other hand, the user had initially said “I want to go to the theater,” that might have activated the theater intent which would set a theater context — so when we ask “What area of theaters are you interested in?” and the user says “The [location]” or even just “[location],” we will take them down a completely different path of suggesting theaters rather than hotels in Rome.

The way you can create conversations without ever using external code is by using these contexts. A user might say “What times are you open?”; you could set an open-time-inquiry context. In your response, you could give the times and ask if they want the phone number to contact you. You would then make a yes/no intent which matches the context you have set, so if your user says “Yes” you respond with the number. This could be set up within an hour but gets exponentially more complex when you need to respond to specific parts of the message. For instance, if you have different shop locations and want to give the right phone number without having to write out every possible location they could say in API.AI, you’ll need to integrate with external code (see section three).

Now, there will be times when your users don’t say what you’re expecting. Excluding contexts, there are three very important ways to deal with that:

  1. Almost like keyword research — plan out as many possible variations of saying the same thing as possible, and put them all into the intent
  2. Test, test, test, test, test, test, test, test, test, test, test, test, test, test, test (when launched, every chat bot will have problems. Keep testing, keep updating, keep improving.)
  3. Fallback contexts

Fallback contexts don’t have a user says section, but can be boxed in by contexts. They match anything that has the right context but doesn’t match any of your user says. It could be tempting to use fallback intents as a catch-all. Reasoning along the lines of “This is the only thing they’ll say, so we’ll just treat it the same” is understandable, but it opens up a massive hole in the process. Fallback intents are designed to be a conversational safety net. They operate exactly the same as in a normal conversation. If a person asked what you want in your tea and you responded “I don’t want tea” and that person made a cup of tea, wrote the words “I don’t want tea” on a piece of paper, and put it in, that is not a person you’d want to interact with again. If we are using fallback intents to do anything, we need to preface it with a check. If we had to resort to it in the example above, saying “I think you asked me to add I don’t want tea to your tea. Is that right?” is clunky and robotic, but it’s a big step forward, and you can travel the rest of the way by perfecting other parts of your conversation.

3. Integrating with external code

I used Heroku to build my app. Using this excellent weather webhook example you can actually deploy a bot to Heroku within minutes. I found this example particularly useful as something I could pick apart to make my own call-and-response program. The weather webhook takes the information and calls a Yahoo app, but ignoring that specific functionality, you essentially need the following if you’re working in Python:

# Start: a minimal Flask webhook that API.AI can POST to
from flask import Flask, request, make_response
import json

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Read the JSON that API.AI sends us
    req = request.get_json(silent=True, force=True)
    print("Request:")
    print(json.dumps(req, indent=4))

    # Process: do your thing and decide what the response should be
    res = processRequest(req)

    # processRequest is code you write yourself (the weather webhook example
    # above is a good template). It should return a dictionary like:
    # {
    #     "speech": "speech we want to send back",
    #     "displayText": "display text we want to send back, usually matches speech",
    #     "source": "your app name"
    # }

    # Make our response readable by API.AI and send it back to the service
    response = make_response(json.dumps(res))
    response.headers['Content-Type'] = 'application/json'
    return response
# End

As long as you can receive and respond to requests like that (or in the equivalent for languages other than Python), your app and API.AI should both understand each other perfectly — what you do in the interim to change the world or make your response is entirely up to you. The main code I have included is a little different from this because it’s also designed to be the step in-between Slack and API.AI. However, I have heavily commented sections like process_food and the database interaction processes, with both explanation and reading sources. Those comments should help you make it your own. If you want to repurpose my program to work within that five-second window, I would forget about the file called app.py and aim to copy whole processes from tasks.py, paste them into a program based on the weather webhook example above, and go from there.

Initially I’d recommend trying GSpread to make some changes to a test spreadsheet. That way you’ll get visible feedback on how well your application is running (you’ll need to go through the authorization steps as they are explained here).
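As a rough idea of what that first test might look like, here is a hedged sketch: the sheet name and credentials filename are placeholders, and it assumes you have already completed the authorization steps explained in the link above:

import gspread
from oauth2client.service_account import ServiceAccountCredentials

# Scopes that let the service account read and write sheets on its Drive
scope = ["https://spreadsheets.google.com/feeds",
         "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name("credentials.json", scope)
client = gspread.authorize(creds)

# Open a test spreadsheet and write to a single cell for visible feedback
sheet = client.open("Vietnambot test").sheet1
sheet.update_acell("A1", "Hello from the bot")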

4. Using a database

Databases are pretty easy to set up in Heroku. I chose the Postgres add-on (you just need to authenticate your account with a card; it won’t charge you anything and then you just click to install). In the import section of my code I’ve included links to useful resources which helped me figure out how to get the database up and running — for example, this blog post.

I used the Python library Psycopg2 to interact with the database. To steal some examples of using it in code, have a look at the section entitled “synchronous functions” in either the app.py or tasks.py files. Open_db_connection and close_db_connection do exactly what they say on the tin (open and close the connection with the database). You tell check_database to check a specific column for a specific user and it gives you the value, while update_columns adds a value to specified columns for a certain user record. Where things haven’t worked straightaway, I’ve included links to the pages where I found my solution. One thing to bear in mind is that I’ve used a way of including columns as a variable, which Psycopg2 recommends quite strongly against. I’ve gotten away with it so far because I’m always writing out the specific column names elsewhere — I’m just using that method as a short cut.
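For flavor, here is a hedged sketch of what a helper along the lines of check_database could look like using psycopg2’s sql module, which is the safer pattern the library recommends for column names. The table, column, and user_id names below are placeholders rather than the actual Vietnambot schema:

import os
import psycopg2
from psycopg2 import sql

def check_database(column, user_id):
    # Open a connection, read one value for one user, then close again.
    # Heroku's Postgres add-on exposes the connection string as DATABASE_URL.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    try:
        with conn.cursor() as cur:
            # sql.Identifier quotes the column name safely instead of pasting
            # a raw string into the query text
            query = sql.SQL("SELECT {} FROM users WHERE user_id = %s").format(
                sql.Identifier(column))
            cur.execute(query, (user_id,))
            row = cur.fetchone()
            return row[0] if row else None
    finally:
        conn.close()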

5. Processing outside of API.AI’s five-second window

It needs to be said that this step complicates things by no small amount. It also makes it harder to integrate with different applications. Rather than flicking a switch to roll out through API.AI, you have to write the code that interprets authentication and user-specific messages for each platform you’re integrating with. What’s more, spoken-only platforms like Google Home and Amazon Alexa don’t allow for this kind of circumvention of the rules — you have to sit within that 5–8 second window, so this method removes those options. The only reasons you should need to take the integration away from API.AI are:

  • You want to use it to work with a platform that it doesn’t have an integration with. It currently has 14 integrations including Facebook Messenger, Twitter, Slack, and Google Home. It also allows exporting your conversations in an Amazon Alexa-understandable format (Amazon has their own similar interface and a bunch of instructions on how to build a skill — here is an example).
  • You are processing masses of information. I’m talking really large amounts. Some flight comparison sites have had problems fitting within the timeout limit of these platforms, but if you aren’t trying to process every detail for every flight for the next 12 months and it’s taking more than five seconds, it’s probably going to be easier to make your code more efficient than work outside the window. Even if you are, those same flight comparison sites solved the problem by creating a process that regularly checks their full data set and creates a smaller pool of information that’s more quickly accessible.
  • You need to send multiple follow-up messages to your user. When using the API.AI integration it’s pretty much call-and-response; you don’t always get access to things like authorization tokens, which are what some messaging platforms require before you can automatically send messages to one of their users.
  • You’re working with another program that can be quite slow, or there are technical limitations to your setup. This one applies to Vietnambot: I used the GSpread library in my application, which is fantastic but can be slow to pull out bigger chunks of data. What’s more, Heroku can take a little while to start up if you’re not paying.

I could have paid or cut out some of the functionality to avoid needing to manage this part of the process, but that would have failed to meet number 4 in our original conditions: It had to be possible to adapt the skeleton of the process for much more complex business cases. If you decide you’d rather use my program within that five-second window, skip back to section 2 of this post. Otherwise, keep reading.

When we break out of the five-second API.AI window, we have to do a couple of things. First thing is to flip the process on its head.

What we were doing before:

User sends message -> API.AI -> our process -> API.AI -> user

What we need to do now:

User sends message -> our process -> API.AI -> our process -> user

Instead of API.AI waiting while we do our processing, we do some processing, wait for API.AI to categorize the message from us, do a bit more processing, then message the user.

The way this applies to Vietnambot is:

  1. User says “I want [food]”
  2. Slack sends a message to my app on Heroku
  3. My app sends a “swift and confident” 200 response to Slack to prevent it from resending the message. To send the response, my process has to shut down, so before it does that, it activates a secondary process using “tasks.”
  4. The secondary process takes the query text and sends it to API.AI, then gets back the response.
  5. The secondary process checks our database for a user name. If we don’t have one saved, it sends another request to API.AI, putting it in the “we don’t have a name” context, and sends a message to our user asking for their name. That way, when our user responds with their name, API.AI is already primed to interpret it correctly because we’ve set the right context (see section 1 of this post). API.AI tells us that the latest message is a user name and we save it. When we have both the user name and food (whether we’ve just got it from the database or just saved it to the database), Vietnambot adds the order to our sheet, calculates whether we’ve reached the order minimum for that day, and sends a final success message.
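Steps 2 and 3 above boil down to something like the following minimal sketch. The route, function name, and tasks.handle_message are illustrative rather than the exact names in my repo:

from flask import Flask, request, make_response
import tasks  # the Celery-powered secondary process (see section 7)

app = Flask(__name__)

@app.route("/slack", methods=["POST"])
def slack_event():
    payload = request.get_json(silent=True, force=True)
    # Hand the real work (API.AI, the database, Google Sheets) to the
    # secondary process via the task queue...
    tasks.handle_message.delay(payload)
    # ...then send the "swift and confident" 200 straight back so Slack
    # doesn't resend the message
    return make_response("", 200)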

6. Integrating with Slack

This won’t be the same as integrating with other messaging services, but it could give some insight into what might be required elsewhere. Slack has two authorization processes; we’ll call one “challenge” and the other “authentication.”

Slack includes instructions for an app lifecycle here, but API.AI actually has excellent instructions for how to set up your app; as a first step, create a simple back-and-forth conversation in API.AI (not your full product), go to integrations, switch on Slack, and run through the steps to set it up. Once that is up and working, you’ll need to change the OAuth URL and the Events URL to be the URL for your app.

Thanks to github user karishay, my app code includes a process for responding to the challenge process (which will tell Slack you’re set up to receive events) and for running through the authentication process, using our established database to save important user tokens. There’s also the option to save them to a Google Sheet if you haven’t got the database established yet. However, be wary of this as anything other than a first step — user tokens give an app a lot of power and have to be guarded carefully.
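The challenge half of that is tiny: Slack POSTs a url_verification event containing a challenge string and simply wants that string echoed back. Here is a hedged, stripped-down sketch of just that check (the full authentication flow and token storage live in the repo):

from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/slack", methods=["POST"])
def slack_event():
    payload = request.get_json(silent=True, force=True)
    # Slack's setup challenge: echo the string back so Slack knows this endpoint is ours
    if payload.get("type") == "url_verification":
        return make_response(payload["challenge"], 200)
    # ...normal event handling (see the sketch in section 5) carries on here...
    return make_response("", 200)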

7. Asynchronous processing

We are running our app using Flask, which is basically a whole bunch of code we can call upon to deal with things like receiving requests for information over the internet. In order to create a secondary worker process I’ve used Redis and Celery. Redis is our “message broker”; it makes a list of everything we want our secondary process to do. Celery runs through that list and makes our worker process do those tasks in sequence. Redis is a note left on the fridge telling you to do your washing and take out the bins, while Celery is the housemate that bangs on your bedroom door, note in hand, and makes you do each thing. I’m sure our worker process doesn’t like Celery very much, but it’s really useful for us.

You can find instructions for adding Redis to your app in Heroku here and you can find advice on setting up Celery in Heroku here. Miguel Grinberg’s Using Celery with Flask blog post is also an excellent resource, but using the exact setup he gives results in a clash with our database, so it’s easier to stick with the Heroku version.

Up until this point, we’ve been calling functions in our main app — anything of the form function_name(argument_1, argument_2, argument_3). Now, by putting “tasks.” in front of our function, we’re saying “don’t do this now — hand it to the secondary process.” That’s because we’ve done a few things:

  • We’ve created tasks.py which is the secondary process. Basically it’s just one big, long function that our main code tells to run.
  • In tasks.py we’ve included Celery in our imports and set our app as celery.Celery(), meaning that when we use “app” later we’re essentially saying “this is part of our Celery jobs list” or rather “tasks.py will only do anything when its flatmate Celery comes banging on the door”
  • For every time our main process asks for an asynchronous function by writing tasks.any_function_name(), we have created that function in our secondary program just as we would if it were in the same file. However in our secondary program we’ve prefaced with “@app.task”, another way of saying “Do wash_the_dishes when Celery comes banging the door yelling wash_the_dishes(dishes, water, heat, resentment)”.
  • In our “procfile” (included as a file in my code) we have listed our worker process as --app=tasks.app (see the sketch below)
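Pulled together, a stripped-right-down tasks.py along those lines might look like this. It is a sketch only: the task body is the joke example from above, and the REDIS_URL config variable name depends on which Redis add-on you picked:

# tasks.py, stripped right down
import os
import celery

# Point Celery at the Redis instance provisioned for the app
app = celery.Celery('tasks', broker=os.environ.get('REDIS_URL'))

@app.task
def wash_the_dishes(dishes, water, heat, resentment):
    # Whatever slow work we don't want blocking the main process
    print("Washing", dishes, "with", water, "at", heat, "degrees of", resentment)

# The main program queues (rather than runs) this with:
#     tasks.wash_the_dishes.delay(dishes, water, heat, resentment)
# and the Procfile's worker line points Celery at this module via --app=tasks.app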

All this adds up to the following process:

  1. Main program runs until it hits an asynchronous function
  2. Main program fires off a message to Redis, which holds a list of work to be done. The main process doesn’t wait; it just runs through everything after it and, in our case, even shuts down
  3. The Celery part of our worker program goes to Redis and checks for the latest update. It checks which function has been called (because our worker functions are named the same as when our main process called them), gives our worker all the information it needs to start doing that thing, and tells it to get going
  4. Our worker process starts the action it has been told to do, then shuts down.

As with the other topics mentioned here, I’ve included all of this in the code I’ve supplied, along with many of the sources used to gather the information — so feel free to use the processes I have. Also feel free to improve on them; as I said, the value of this investigation was that I am not a coder. Any suggestions for tweaks or improvements to the code are very much welcome.


Conclusion

As I mentioned in the introduction to this post, there’s huge opportunity for individuals and organizations to gain ground by creating conversational interactions for the general public. For the vast majority of cases you could be up and running in a few hours to a few days, depending on how complex you want your interactions to be and how comfortable you are with coding languages. There are some stumbling blocks out there, but hopefully this post and my obsessively annotated code can act as templates and signposts to help get you on your way.

Grab my code at GitHub


Bonus #1: The conversational flow for my chat bot

This is by no means necessarily the best or only way to approach this interaction. This is designed to be as streamlined an interaction as possible, but we’re also working within the restrictions of the platform and the time investment necessary to produce this. Common wisdom is to create the flow of your conversation and then keep testing to perfect, so consider this example layout a step in that process. I’d also recommend putting one of these flow charts together before starting — otherwise you could find yourself having to redo a bunch of work to accommodate a better back-and-forth.

Bonus #2: General things I learned putting this together

As I mentioned above, this has been a project of going from complete ignorance of coding to slightly less ignorance. I am not a professional coder, but I found the following things I picked up to be hugely useful while I was starting out.

  1. Comment everything. You’ll probably see my code is bordering on excessive commenting (anything after a # is a comment). While normally I’m sure someone wouldn’t want to include a bunch of Stack Overflow links in their code, I found notes about what portions of code were trying to do, and where I got the reasoning from, hugely helpful as I tried to wrap my head around it all.
  2. Print everything. In Python, everything within “print()” will be printed out in the app logs (see the commands tip for reading them in Heroku). While printing each action can mean you fill up a logging window terribly quickly (I started using the Heroku add-on LogDNA towards the end and it’s a huge step up in terms of ease of reading and length of history), often the times my app was falling over was because one specific function wasn’t getting what it needed, or because of another stupid typo. Having a semi-constant stream of actions and outputs logged meant I could find the fault much more quickly. My next step would probably be to introduce a way of easily switching on and off the less necessary print functions.
  3. The following commands: Heroku’s how-to documentation for creating an app and adding code is pretty great, but I found myself using these all the time, so I thought I’d share them (all of the below are written in the command line; open it by typing cmd on Windows or by running Terminal on a Mac):
    1. cd "[file location]" – move into the folder your code is in
    2. git init – create a git repository to add your code to
    3. git add . – add all of the code in your folder to the repository that git will put online
    4. git commit -m "[description of what you’re doing]" – save the data in your git repository
    5. heroku git:remote -a [the name of your app] – select your app as where to put the code
    6. git push heroku master – send your code to the app you selected
    7. heroku ps – find out whether your app is running or has crashed
    8. heroku logs – apologize to your other half for going totally unresponsive for the last ten minutes and start the process of working through your printouts to see what has gone wrong
  4. POST requests will always wait for a response. Seems really basic — initially I thought that by just sending a POST request and not telling my application to wait for a response I’d be able to basically hot-potato work around and not worry about having to finish what I was doing. That’s not how it works in general, and it’s more of a symbol of my naivete in programming than anything else.
  5. If something is really difficult, it’s very likely you’re doing it wrong. While I made sure to do pretty much all of the actual work myself (to avoid simply farming it out to the very talented individuals at Distilled), I was lucky enough to get some really valuable advice. The piece of advice above was from Dominic Woodman, and I should have listened to it more. The times when I made least progress were when I was trying to use things the way they shouldn’t be used. Even when I broke through those walls, I later found that someone didn’t want me to use it that way because it would completely fail at a later point. Tactical retreat is an option. (At this point, I should mention he wasn’t the only one to give invaluable advice; Austin, Tom, and Duncan of the Distilled R&D team were a huge help.)


The Voice Playbook – Building a Marketing Plan for the Next Era in Computing

Posted by SimonPenson

Preface

This post serves a dual purpose: it’s a practical guide to the realities of preparing for voice right now, but equally it’s a rallying call to ensure our industry has a full understanding of just how big, disruptive, and transformational it will be — and that, as a result, we need to stand ready.

My view is that voice is not just an add-on, but an entirely new way of interacting with the machines that add value to our lives. It is the next big era of computing.

Brands and agencies alike need to be at the forefront of that revolution. For my part, that begins with investing in the creation of a voice team.

Let me explain just how we plan to do that, and why it’s being actioned earlier than many will think necessary….

Jump to a section:

Why is voice so important?
When is it coming in a big way?
Who are the big players?
Where do voice assistants get their data from?
How do I shape my strategy and tactics to get involved?
What skill sets do I need in a “voice team?”

Introduction

“The times, they are a-changing.”
– Bob Dylan

Back in 1964, that revered folk-and-blues singer could never have imagined just what that would mean in the 21st century.

As we head into 2018, we’re nearing a voice interface-inspired inflection point the likes of which we haven’t seen before. And if the world’s most respected futurist is to be believed, it’s only just beginning.

Talk to Ray Kurzweil, a Director of Engineering at Google and the man Bill Gates says is the “best person to predict the future,” and he’ll tell you that we are entering a period of huge technological change.

For those working across search and many other areas of digital marketing, change is not uncommon. Seismic events, such as the initial roll out of Panda and Penguin, reminded those inside it just how painful it is to be unprepared for the future.

At best, it tips everything upside down. At worst, it kills those agencies or businesses stuck behind the curve.

It’s for exactly this reason that I felt compelled to write a post all about why I’m building a voice team at Zazzle Media, the agency I founded here in the UK, as stats from BrightEdge reveal that 62% of marketers still have no plans whatsoever to prepare for the coming age of voice.

I’m also here to argue that while the growth traditional search agencies saw through the early 2000s is over, similar levels of expansion are up for grabs again for those able to seamlessly integrate voice strategies into an offering focused on the client or customer.

Winter is coming!

Based on our current understanding of technological progress, it’s easy to rest on our laurels. Voice interface adoption is still in its very early stages. Moore’s Law draws a (relatively) linear line through technological advancement, giving us time to take our positions — but that era is now behind us.

According to Kurzweil’s thesis on the growth of technology (the Law of Accelerating Returns),

“we won’t experience 100 years of progress in the 21st century – it will be more like 20,000 years.”

Put another way, he explains that technology does not progress in a linear way. Instead, it progresses exponentially.

“30 steps linearly get you to 30. One, two, three, four, step 30 you’re at 30. With exponential growth, it’s one, two, four, eight. Step 30, you’re at a billion,” he explained in a recent Financial Times interview.

In other words, we’re going to see new tech landing and gaining traction faster than we ever realized it possible, as this chart proves:

Above, Kurzweil illustrates how we’ll be able to produce computational power as powerful as a human brain by 2023. By 2037 we’ll be able to do it for less than a one-cent cost. Just 15 years later computers will be more powerful than the entire human race as a whole. Powerful stuff — and proof of the need for action as voice and the wider AI paradigm takes hold.

Voice

So, what does that mean right now? While many believe voice is still a long ways off, one point of view says it’s already here — and those fast enough to grab the opportunity will grow exponentially with it. Indeed, Google itself says more than 20% of all searches are already voice-led, and will reach 50% by 2020.

Let’s first deal with understanding the processes required before then moving onto the expertise to make it happen.

What do we need to know?

We’ll start with some assumptions. If you are reading this post, you already have a good understanding of the basics of voice technology. Competitors are joining the race every day, but right now the key players are:

  • Microsoft Cortana – Available on Windows, iOS, and Android.
  • Amazon Alexa – Voice-activated assistant that lives on Amazon audio gear (Echo, Echo Dot, Tap) and Fire TV.
  • Google Assistant – Google’s voice assistant powers Google Home as well as sitting across its mobile and voice search capabilities.
  • Apple Siri – Native voice assistant for all Apple products.

And (major assistants) coming soon:

All of these exist to allow consumers the ability to retrieve information without having to touch a screen or type anything.

That has major ramifications for those who rely on traditional typed search and a plethora of other arenas, such as the fast-growing Internet of Things (IoT).

In short, voice allows us to access everything from our personal diaries and shopping lists to answers to our latest questions and even to switch our lights off.

Why now?

Apart from the tidal wave of tech now supporting voice, there is another key reason for investing in voice now — and it’s all to do with the pace at which voice is actually improving.

In KPCB’s recent Internet Trends report, Andrew Ng, chief scientist at Chinese search engine Baidu, was asked what it was going to take to push voice out of the shadows and into its place as the primary interface for computing.

His point was that at present, voice is “only 90% accurate” and therefore the results are sometimes a little disappointing. This slows uptake.

But he sees that changing soon, explaining that “As speech recognition accuracy goes from, say, 95% to 99%, all of us in the room will go from barely using it today to using it all the time. Most people underestimate the difference between 95% and 99% accuracy — 99% is a game changer… “
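
A quick way to see why those last few percentage points matter so much: the chance that a whole query is transcribed without a single error falls off rapidly with query length. A rough illustration in Python, assuming (simplistically) that each word is recognized independently:

```python
# Probability an entire N-word query is transcribed with zero errors,
# assuming each word is recognized independently at the given accuracy.
def query_success_rate(word_accuracy, words=10):
    return word_accuracy ** words

for accuracy in (0.90, 0.95, 0.99):
    print(f"{accuracy:.2f} -> {query_success_rate(accuracy):.2f}")

# 0.90 -> 0.35  (a 10-word query fails roughly two times in three)
# 0.95 -> 0.60
# 0.99 -> 0.90  (the "game changer" Ng describes)
```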

When will that happen? In the chart below we see Google’s view on this question, predicting we will be there in 2018!

Is this the end for search?

It is also important to point out that voice is an additional interface and will not replace any of those that have gone before it. We only need to look back at history to see how print, radio, and TV continue to play a part in our lives alongside the latest information interfaces.

Moz founder Rand Fishkin made this point in a recent Whiteboard Friday, explaining that while voice search volumes may well overtake typed terms, demand for traditional typed SERPs will continue to grow too, simply because overall search usage keeps growing.

The key will be creating a channel strategy as well as a method for researching both voice and typed opportunity as part of your overall process.

What’s different?

The key difference when considering the voice opportunity is the conversational nature of the interface. For years we’ve been trained to type succinctly in order to get answers quickly, but voice does away with that requirement.

Instead, we are presented with an opportunity to ask, find, and discover the things we want and need using natural language.

This means that we will naturally lengthen the phrases we use to find the stuff we want — and early studies support this assumption.

In a study by Microsoft and covered by the brilliant Purna Virji in this Moz post from last year, we can see a clear distinction between typed and voice search phrase length, even at this early stage of conversational search. Expect this to grow as we get used to interacting with voice.
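
One practical way to gauge that shift in your own data is to bucket queries by word count and watch how the distribution changes over time. A minimal sketch, with made-up query lists standing in for whatever export you use (Search Console, site search logs, etc.):

```python
from collections import Counter

def length_distribution(queries):
    """Count how many queries fall into each word-count bucket."""
    return Counter(len(query.split()) for query in queries)

# Hypothetical samples; in practice, load your exported query reports here.
typed = ["lawn mower reviews", "best rotary mower", "mower under 500"]
voice = [
    "what's the best rotary lawn mower for under 500 pounds",
    "where is the nearest shop that sells mcculloch mowers",
]

print("typed:", sorted(length_distribution(typed).items()))
print("voice:", sorted(length_distribution(voice).items()))
```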

The evidence suggests that will happen fast too. Google’s own data shows us that 55% of teens and 40% of adults use voice search daily. Below is what they use it for:

While it is easy to believe that voice only extends to search, it’s important to remember that the opportunity is actually much wider. Below we can see results from a major 2016 Internet usage study into how voice is being used:

Clearly, the lion’s share is related to search and information retrieval, with more than 50% of actions relating to finding something local to go/see/do (usually on mobile) or using voice as an interface to search.

But an area sure to grow is the leisure/entertainment sector. More on that later.

The key question remains: How exactly do you tap into this growing demand? How do you become the choice answer above all those you compete with?

With such a vast array of devices, the answer is a multi-faceted one.

Where is the data coming from?

To answer the questions above, we must first understand where the information is being accessed from and the answer, predictably, is not a simple one. Understanding it, however, is critical if you are to build a world-class voice marketing strategy.

To make life a little easier, I’ve created an at-a-glance cheat sheet to guide you through the process. You can download it by clicking on the banner below.

In it, you’ll find an easy-to-follow table explaining where each of the major voice assistants (Siri, Cortana, Google Assistant, and Alexa) retrieve their data from so you can devise a plan to cover them all.

The key takeaway from that research? Interestingly, Bing has every opportunity to steal a big chunk of market share from Google and, at least at present, is the key search engine to optimize for if voice “visibility” is the objective.

Bing is more important now.

Of all the Big Four in voice, three (Cortana, Siri, and Alexa) default to Bing for general information retrieval. Given that Facebook (a former Bing search partner) is also joining the fray, Google could soon find itself in a place it’s not entirely used to being: alone.

Now, the search giant usually finds a way to pull back market share, but for now a marketer’s focus should be on Microsoft’s search engine, with Google as a secondary player.

Irrespective of which engine you prioritize, there are two key areas to focus on: featured snippets and local listings.

Featured snippets

The search world has been awash with posts and talks on this area of optimization over recent months as Google continues to push ahead with the rollout of this feature-rich SERP real estate.

For those that don’t know what a “snippet” is, there’s an example below, shown for a search for “how do I get to sleep”:

Not only is this incredibly valuable traditional search real estate (as I’ve discussed in an earlier blog post), but it’s a huge asset in the fight for voice visibility.

Initial research by experts such as Dr. Pete Meyers tells us, clearly, that Google Assistant pulls its answers from featured snippet content for anything with any level of complexity.

Simple answers, such as those for searches about sports results, the weather, and so forth, are answered directly. But for queries that require expertise, it reads out site content and explains where that information came from.

At present, it’s unclear how Google plans to help us measure and attribute these kinds of visits, but according to Google’s Gary Illyes, reporting for them is imminent within Search Console.

Measurement will clearly be an important step in selling any voice strategy proposal upwards, and in providing site- or brand-level evidence that the medium is growing and deserves investment.

User intent and purchase

Such data will also help us understand how voice alters such things as the traditional conversion funnel and the propensity to purchase.

We know how important content is in the traditional user journey, but how will it differ in the voice world? There’s sure to be a rewrite of many rules we’ve come to know well from the “typed Internet.”

Applying some logic to the challenge, it’s clear there’s greater value in searches that show some immediacy, i.e. people asking a home assistant or phone for the location of something, or for its opening times and dates.

Whereas with typed search we see greater value in simple phrases that we call “head terms,” the world is much more complex in voice. Below we see a breakdown of words that will trigger searches in voice:

To better understand this, let’s examine a potential search “conversation.”

If we take a product search example for, let’s say, buying a new lawn mower, the conversation could go a little like this:

[me] What’s the best rotary lawn mower for under £500?

[voice assistant] According to Lawn Mower Hut there are six choices [reads out choices]

Initially, voice will struggle to understand how to move to the next logical question, such as:

[voice assistant] Would you like a rotary or cylinder lawn mower?


Or, better still…

[voice assistant] Is your lawn perfectly flat?


[me] No.


[voice assistant] OK, may I suggest a rotary mower? If so then you have two choices, the McCulloch M46-125WR or the BMC Lawn Racer.

In this scenario, our voice assistant has connected the dots and asks the next relevant question to help narrow the search in a natural way.

Natural language processing

Doing this, however, requires a step up in computer processing power, a challenge being worked on as we speak in a bid to deliver the next level of voice search.

Solving that challenge requires the use of so-called Deep Neural Networks (DNNs): interconnected layers of processing units designed to mimic the neural networks in the brain.

DNNs can work across everything from speech and images to sequences of words and even location data, before classifying those inputs into categories.

They rely on the input of truckloads of data to learn how best to bucket those things, and that data pile will grow exponentially as the adoption of voice accelerates.
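
As a loose illustration of that “bucket the inputs” idea, here’s a tiny text classifier built on a small feedforward network, with scikit-learn standing in for the vastly larger models the assistants actually use. The queries and intent labels are invented for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical training data: voice-style queries labelled by intent.
queries = [
    "what's the best rotary lawn mower under 500 pounds",
    "nearest shop that stocks the mcculloch m46",
    "cheapest deal on the bmc lawn racer",
    "how do i get to sleep",
]
intents = ["product_research", "local", "purchase", "informational"]

# A small multi-layer network over simple word features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(queries, intents)

print(model.predict(["where can i buy a rotary mower near me"]))
```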

What that will mean is that voice assistants can converse with us in the same way as a clued-up shop assistant, further reducing the need for in-store visits and making the research process much more streamlined.

In this world, we start to paint a very different view of the “keywords” we should be targeting, with deeper and more exacting phrases winning the battle for eyeballs.

As a result, the long tail’s rise in prominence continues at pace, and data-driven content strategies move to the center of the marketing plan as the reward for creating very specific content increases.

We also see a greater emphasis placed on keywords that may not be on top of the priority list currently. If we continue to work through our examples, we can start to paint a picture of how this plays out…

In our lawnmower purchase example, we’re at a stage where two options have been presented to us (the McCulloch and the BMC Racer). In a voice 1.0 scenario, where we have yet to see DNNs develop enough to know the next relevant question and answer, we might ask:

[me] Which has the best reviews?

And the answer may be tied to a third-party review source, such as…

[voice assistant] According to Trustpilot, the McCulloch has a 4.5-star rating versus a 3.5-star rating for the BMC lawn mower.

Suddenly, third-party reviews become more valuable than ever as a conversion optimization opportunity, as does a strategy of creating content to own the SERP for keyword phrases that include “review” or “top rated.”

And where would we naturally go from here? The options are either directly to conversion, via some kind of value-led search (think “cheapest McCulloch M46-125WR”), or to a location-based one (“nearest shop with a McCulloch M46-125WR”) to allow me to give it a “test drive.”

Keyword prioritization

This single journey gives us some insight into how the interface could shape our thinking on keyword prioritization and content creation.

Pieces that help a user either make a decision or perform an action around the following trigger words and phrases will attract greater interest and traffic from voice (a minimal filtering sketch follows the list). Examples could include:

  • buy
  • get
  • find
  • top rated
  • closest
  • nearest
  • cheapest
  • best deal
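
As a simple starting point, you could flag the rows of an existing keyword export that contain these action triggers and review their volumes separately. A minimal sketch; the file name and column headers are assumptions, so adjust them to match your own export:

```python
import csv

# Action/intent triggers discussed above.
TRIGGERS = {"buy", "get", "find", "top rated", "closest", "nearest",
            "cheapest", "best deal"}

def has_trigger(keyword):
    kw = keyword.lower()
    return any(trigger in kw for trigger in TRIGGERS)

# Hypothetical export with "keyword" and "volume" columns.
with open("keywords.csv", newline="") as f:
    for row in csv.DictReader(f):
        if has_trigger(row["keyword"]):
            print(row["keyword"], row["volume"])
```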

Many are not dissimilar to typed search, but clearly intent priorities change. The aforementioned Microsoft study also looked at how this may work, suggesting the following order of question types and their association with purchase/action:

Local opportunity

This also pushes the requirement for serious location-based marketing investment much higher up the pecking order.

We can clearly see how important such searches become from a “propensity to buy/take action” perspective.

It pays to invest more in ensuring the basics are covered, for which the Moz Local Search Ranking Factors study can be a huge help, but also in putting some weight behind efforts across Bing Places. If you are not yet set up fully over there, this simple guide can help.

Local doesn’t start and end with set up, of course. To maximize visibility there must be an ongoing local marketing plan that covers not just the technical elements of search but also wider marketing actions that will be picked up by voice assistants.

We already know, for instance, that engagement factors are playing a larger part in the algorithmic mix for local, but our understanding of what that really means may be limited.

Engagement is not just a social metric but a real-world one. Google, for instance, knows not just what you search for but where you go (via location tracking and beacon data), what you watch (via YouTube), the things you are interested in, and where you plan to travel (via things such as Flights search and Maps data). We need to leverage each of these data points for maximum effect.

As a good example of this in action, we mentioned the importance of reviews earlier; here they play a significant part in the local plan. A proactive review acquisition strategy is really important, so build it into your everyday activity by encouraging visitors to leave reviews. That also means actively monitoring all the key review sites, not just your favorite!

Use your email strategy to drive this behavior as well by ensuring that newsletters and offer emails support the overall local plan.

And a local social strategy is also important. Get to know your best customers and most local visitors and turn them into evangelists.

Doing it is easier than you might think; you can use Twitter mention monitoring not only to search for key terms, but also to find mentions within a specific latitude/longitude area or radius.

Advanced search also allows you to discover tweets posted from, or mentioning, a location. This can be helpful research for uncovering the local questions being asked.
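
A low-tech way to experiment with this is to build the search URLs yourself using Twitter’s public location operators. A small sketch; the phrase, place name, and radius are placeholders:

```python
from urllib.parse import urlencode

# Combine a question-style phrase with Twitter's near:/within: operators
# to surface local questions being asked around your business.
query = 'lawn mower ? near:"Peterborough" within:15mi'
url = "https://twitter.com/search?" + urlencode({"q": query})
print(url)
```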

The awesome team at Zapier covered this topic in lots of detail recently, so for those who want to action this particular point I highly recommend reading this post.

Let’s go deeper

There is new thinking needed if the opportunity is to be maximized. To understand this, we need to go back to our user journey thought process.

For starters, there’s the Yelp/Alexa integration. While the initial reaction may be simply to optimize listings for the site, the point is actually a wider one.

Knowing that many of the key vertical search engines (think Skyscanner [travel], Yelp [local], etc.) will spend big to ensure they win the lion’s share of the voice market, it will pay to spend time improving your content on these sites.

Which is most important will depend entirely on the niche you’re working in. Many will only offer limited opportunity for optimization, but being there and making sure your profile is as complete as it can possibly be will be key. It may even pay to take sponsored opportunities within them for the added visibility they may give you in the future.

There’s also the really interesting intellectual challenge of attempting to map out as many potential user journeys as possible to and from your business.

Let’s take our lawnmower analogy again, but this time from the perspective of a retailer situated within 20 miles of the searcher. In this scenario, we need to think about how we might be able to get front and center before anyone else if we stock the McCulloch model they are looking for.

If we take it as a given that we’ve covered the essentials, then we need to think more laterally.

It’s natural not only to look for a local outlet that stocks the right model, but also to ask when it’s open. We might also ask more specific questions, like whether they have parking, whether they’re busy at specific times, or whether they offer appointments.

The latter would be a logical step, especially for businesses that work in this way; think dentists, doctors, beauty salons, and even trades. The opportunity to book a plumber at a specific time via voice would be a game changer for those set up to offer it.

Know your locality

As a local business, it is also imperative that you know the surrounding areas well and can prove you’ve thought about them. This includes looking at how people talk about key landmarks from a voice perspective.

We often use slang or shortened versions of landmark names, for instance. In a natural, conversational setting, you may find you miss out if you don’t use those idiosyncrasies within the content you produce and feature on your site or within your app.

Fun and entertainment

Then, of course, comes the “fun.” Think of it as the games section of the App Store — it makes little logical sense, but in it lies a whole industry of epic proportions.

Voice will give birth to the next era in entertainment. While some of you may be thinking about how to profit from such an active audience, the majority of brands would be smart to treat it as an engagement and brand-awareness play.

Game makers will clamour to create hit mind games and quizzes, but those that play around the edges may well be the monarchs of this opportunity. Think about how voice could change the dynamic for educators, play the part of an unbiased referee in games, or teach birdsong and the birds it relates to. The opportunity is endless, and it will claim 25% of the overall pie, according to current usage research.

The monetization methods are yet to be uncovered, but the advertising opportunity is significant, and clever technology like blockchain may enable frictionless payments and more.

User journey mapping

So how do you tie all of this together into a seamless plan, given the complexity and number of touch points available? The answer starts and ends with user journey mapping.

This is something I find myself doing more and more now as part of the wider marketing challenge. Fragmented audiences and a plethora of devices and technology mean it’s more difficult than ever to build an integrated strategy. Taking a user-centric approach is the only way to make sense of the chaos.

Voice is no different, and the key differentiator here is the fact that in this new world a journey is actually a conversation (or a series of them).

Conversation journey mapping

While the tech may not yet be there to support full conversations in voice, given the point at the beginning of this piece around the Law of Accelerating Returns, it’s clear that it’s coming, and faster than we realize.

In some respects, the timing of that advancement is irrelevant, however, as the process of working through a series of conversations that a potential client or customer may have around your product or service is invaluable as research for your plan.

To go back to our lawnmower example, a conversation mapping exercise may look a little like this:

[me] What’s the best lawnmower for under £500?

[voice assistant] How large is your lawn?

[me] It’s not very big. I don’t need a ride-on.

[voice assistant] OK, so would you prefer a cylinder or rotary version?

[me] I don’t know. How do I choose?

[voice assistant] If you want stripes and your lawn is very flat, a cylinder gives a better finish. If not, a rotary is better.

[me] OK, definitely a rotary then!

[voice assistant] Good choice. In that case, your best options are either the McCulloch M46-125WR or the BMC Lawn Racer.

[me] Which is best?

[voice assistant] According to Trustpilot, the McCulloch has 4.5 stars from 36 reviews versus 3.5 stars for the BMC. The McCulloch is also cheaper. Do you want me to find the best deal or somewhere nearby that stocks it?

[me] I’d like to see it before buying if possible.

[voice assistant] OK, ABC Lawn Products is 12 miles away and has an appointment at 11am. Do you want to book it?

[me] Perfect.

Where are the content or optimization opportunities?

Look carefully above and you’ll see that there are huge swathes of the conversation that lend themselves to opportunity, either through content creation or some other kind of optimization.

To spell that out, here’s a possible list (a simple way to encode this mapping follows it):

  • Guide – Best lawnmower for £500
  • Guide – Rotary versus cylinder lawnmowers
  • Review strategy – Create a plan to collect more reviews
  • Optimization – Evergreen guide optimization strategy to enhance featured snippet opportunities
  • Local search – Optimize business listing to include reviews, opening times, and more
  • Appointments – Open up an online appointment system and optimize for voice
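
One lightweight way to run this exercise with a team is to encode each conversational turn alongside the asset that would let you “own” it, so gaps in the content plan become obvious at a glance. A sketch using the lawnmower journey above; the turns and asset names are just illustrative:

```python
# Map each step of the conversation to the asset that answers it.
conversation_map = [
    {"turn": "best lawnmower for under £500",  "asset": "buying guide"},
    {"turn": "rotary or cylinder?",            "asset": "comparison guide"},
    {"turn": "which has the best reviews?",    "asset": "review acquisition plan"},
    {"turn": "where can I see one nearby?",    "asset": "local listing + stock info"},
    {"turn": "book an appointment",            "asset": "online booking system"},
]

for step in conversation_map:
    print(f'{step["turn"]!r:40} -> {step["asset"]}')
```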

In developing such a roadmap, it’s also important to consider the context within which the conversation is happening.

Few of us will ever feel entirely comfortable using voice in a crowded, public setting, for instance. We’re not going to try using voice on a bus, train, or at a festival anytime soon.

Instead, voice interfaces will be used in private: most likely in homes and cars, and anywhere it’s useful to be able to do multiple things at once.

Setting the scene in this way will help as you define your conversation possibilities and the optimization opportunities from it.

Who do we need to create all this?

The one missing piece of the jigsaw as we prepare for the shift to voice? People.

All of the above requires a great deal of work to perfect and implement, and while the dust still needs to settle on the specifics of voice marketing, there are certain skill sets that will need to pull together to deliver a cohesive strategy.

For the majority, this will simply mean creating project groups from existing team members. But for those with the biggest opportunities (think recipe sites, large vertical search plays, and so on), it may be that a standalone team is necessary.

Here’s my take on what that team will require:

  • Developer – with specific skill in creating Google Home Actions, Alexa Skills, and so on.
  • Researcher – to work with customer groups to understand how voice is being used and capture further opportunities for development.
  • SEO – to help prioritize content creation and how it’s structured and optimized.
  • Writer – to build out the long-tail content and guides necessary.
  • Voice UX expert – A specialist in running conversation mapping sessions and turning them into brilliant user journeys for the different content and platforms your brand utilizes.

Conclusion

If you’ve read to this point, you at least have an active interest in this fast-moving area of tech. We know from the minds of the most informed experts that voice is developing quickly and that it clearly offers significant benefits to its users.

When those two key things combine, alongside the falling cost of the technology needed to access it, we reach a tipping point that only ends one way: in the birth of a new era for computing.

Such a shift has massive implications for both digital and wider marketing, and it will pay to have first-mover advantage.

That means educating upwards and beginning the conversation around how voice interfaces may change your own industry in the future. Once you have that running, who knows where it might lead you?

For some it changes little; for others, everything. The good news for search marketers is that many existing tactics and skill sets will have an even bigger part to play.

Existing skills

  • The ability to claim featured snippets and answer boxes becomes even more rewarding as they supply the answers to millions of voice searches.
  • Keyword research has a wider role in forming strategies to reach into voice and outside traditional search, as marketers become more interested in the natural language their audiences are using.
  • Local SEO wins become wider than simply appearing in a search engine.
  • Micro-moments become more numerous and even more specific than ever before. Research to uncover these becomes even more pivotal.

New opportunities to consider

  • Content consumption will increase as voice integrates further into daily life, so think about what other kinds of content you can deliver to capture that demand.
  • Think about Internet of Things integration and how your brand may be able to provide content for those devices, or to help people use the connected home.
  • Look at what Skills/Actions you can create to play in the “leisure and entertainment” sector of voice. This may be as much an engagement/awareness play as a pure conversion or sales one, but it’s going to be a huge market. Think quick games, amazing facts, jokes, and more…
  • Conversation journey mapping is a powerful new skill to be learned and implemented to tie all content together.

Here’s to the next 50 years of voice interface progress!


Make the Most of Your MozCon 2017 Adventure – A Seattle How-To

Posted by Danielle_Launders

There’s a little secret we keep here in Seattle: it doesn’t actually rain all the time (we just want people to think that so we can keep the beautiful summers all to ourselves). Those of you who have been to a MozCon before are in on that secret; those of you who are joining us for MozCon 2017 on July 17–19 will soon find out!

It can be hard coming to a new city and trying to find food and experiences off the beaten path, which is why Mozzers have come together to share some of their favorite places, both new and old, to help you make the most of your time in Seattle this summer. If you don’t have your ticket and don’t want to miss out on all the fun, grab yours now — they’re selling out!

Buy my MozCon 2017 ticket

Unfamiliar with MozCon and not sure what you’ll learn? Scope out the full agenda with all the juicy details on who’s speaking and what topics we’re covering.

Official MozCon activities

We want you to enjoy yourself, make new industry friends, and get the most out of your MozCon experience — which is why we have an assortment of events and activities to keep you busy.

Monday night #MozCrawl

Monday night is all about exploring and making new friends. Join us from 7–10pm for our annual #MozCrawl. This year we’re bringing it back to the Capitol Hill neighborhood! Get to know your fellow attendees and our six MozCon partners hosting the fun. You’ll be able to go at your own pace and in any order.

Bonus points: have your MozCon Passport stamped at all of the stops and enter our drawing to win a ticket to MozCon 2018.

Capitol Cider hosted by Klipfolio

Linda’s Tavern hosted by WordStream

The Runaway hosted by CallRail

Stout hosted by Jumpshot

Unicorn hosted by BuzzStream

Saint John’s Bar & Eatery hosted by Moz

Tuesday night MozCon Ignite

You’ll definitely laugh, you’ll likely cry, and most importantly you’ll enjoy yourself at MozCon Ignite. Listen to twelve of your fellow attendees share their journeys, life lessons, and unique hobbies in our five-minute Ignite-style passion talk series. MozCon Ignite will take place at Benaroya Hall from 7–10pm, where you’ll have time to relax, unwind, and mingle.

  • My Life with Guinea Pigs with Britt Kemp at Bishop Fox
  • A Disastrous Camping Trip with the Best Partner with JR Ridley at Go Fish Digital
  • My Wife, Actually: A Story of Being Gay Enough with Joy Brandon at Nebo Agency
  • Homebrewing 101: A 5-minute Primer on DIY Alcohol with Erin McCaul at Moz
  • This Too Shall Pass: The Blessing of Perspective with Yosef Silver at Search Interactions
  • The King of Swing: A Guide to Creative Fundraising with Cameron Rogowski at Double Dumplings
  • How Finding my Sister’s Mother Changed my Life with Ed Reese at JEB Commerce
  • Living My Life with an Identical Clone with Christopher Beck at Internet Marketing Inc.
  • How to Change Sex the Easy Way with Maura Hubbell at Moz
  • 4 Signs Your Friend or Loved One is a Birder with Jeremy Schwartz at MediaPro
  • How to Save Humanity in Twenty Minutes a Day with Andrea Dunlop, author & independent book marketing consultant
  • Traumatic Brain Injury & Why Self-Diagnosis Sucks with Blake Denman at RicketyRoo Inc.

Wednesday night MozCon Bash

Bowling: check! Karaoke: check! Photobooth: check! Join us for one last hurrah before we meet again at MozCon 2018. You won’t want to miss this closing night bash — we’ll have plenty of games, food, and fun as we mix and mingle, say “see ya soon” to friends new and old, and reminisce over our favorite lessons from the past 3 days.

Birds-of-a-feather lunch tables

At lunch, you’ll have the opportunity to connect with your fellow community members around the professional topics that matter most to you. There will be seven tables each day with different topics and facilitators; find one with a sign noting the topic and join the conversation to share advice, learn tips and tricks, and make new friends.

Monday, July 17

  • B2B Email Marketing hosted by Steve Manjarrez at Moz
  • E-commerce hosted by Everett Sizemore at Inflow
  • In-house SEO hosted by Kristin Fraccia at Magoosh
  • It’s Just Me — Digital Departments of One hosted by Liz Reuth at Le-vel
  • Linkbuilding hosted by Rachael Brandt at Magoosh
  • On-Page SEO hosted by Cyrus Shepard at Fazillion
  • Travel Website SEO hosted by Michael Cottam at Visual Itineraries

Tuesday, July 18

  • In-house SEO hosted by Jackson Lo at Tripadvisor
  • Link Building hosted by Russ Jones at Moz
  • Mobile Marketing hosted by Bridget Randolph at Hearst Magazines
  • Perceiving Brand Through Digital PR hosted by Manish Dudharejia at E2M Solutions
  • Product Marketing hosted by Brittani Dinsmore at Moz
  • Search Trends hosted by Gianluca Fiorelli at IloveSEO.net
  • Technical SEO hosted by Corey Eulas at Factorial Digital

Wednesday, July 19

Even more ideas for your Seattle adventure!

There are so many wonderful places to see, food to eat, and yes, coffee and craft beer to be consumed. Lots and lots of coffee and craft brews. That’s why a few Mozzers have pulled together their favorite places to check out during your stay in the Emerald City.

No Anchor
“By far my favorite place in Belltown. Incredibly unique beer selection and fresh local food combinations that you can’t find anywhere else.”
Abe Schmidt

Marination Ma Kai
“Marination is one of the top food trucks in the country and now they have several brick and mortar restaurants. Marination Ma Kai is located in West Seattle and has a big outdoor patio with gorgeous views of downtown Seattle, it’s a summer hotspot for a cool beverage and noms. Why is it quintessential Seattle? Not only is the food life changing, the view amazing, but getting there is an adventure! Just walk down to the waterfront and hop on the wonderful Seattle Water Taxi. The trip from downtown drops riders off right at the restaurant.”

Rapha Seattle
“If you LOVE bicycles this place is a must-visit. One of only five US Rapha Clubhouses, Rapha Seattle is home to delicious coffee, fine food, and bicycle events.

The atmosphere is cool and inviting. Visitors are surrounded by the coolest bicycle gear and memorabilia. You can rent a Canyon bicycle to explore the city (Which is a big deal because you cannot buy Canyon bikes in America, yet). Rapha also does guided bike rides for the public and member only rides.”
James Daugherty

Taylor Shellfish (Pioneer Square, Capitol Hill, or Queen Anne)
“The Puget Sound offers the best oysters in the world. What’s great about Taylor Shellfish is that it’s all about the oysters, the drinks and the people you’re with in a simple, unpretentious, come-as-you-are atmosphere. There’s nothing more quintessential to Seattle than that.”

The Point in Burien
“An all-around great bar to grab a bite and a drink if your flight is delayed or you need to kill some time near the airport. The Point is 10 minutes from SeaTac, has a fantastic menu (including lots of gluten free options), a great cocktail menu, tap list, and big-screen TVs.”
Brittani Dinsmore

Hattie’s Hat
“Ballard was an old fishing village. Hattie’s Hat bar has been in continuous operation for over 100 years and the bar that you sit at was installed in 1907 or something. Incredible. The bartenders are all in Seattle bands, some of them moderately famous from the 1990s. Go in the early afternoon. Ask for Lupe or Lara. Sit at the bar. You’ll thank me for it.”
Brian Childs

Holy Mountain Brewery
“Seattle is a beer city. Holy Mountain makes Seattle’s best beer. Go there.”
Evelyn Baek

The Whale Wins, Revel, Joule, and Fremont Brewing
“All are in the Fremont area and are each tasty in their own right. Besides if you don’t like those options there are plenty of places to choose from in Fremont”
Steve Manjarrez

Ada’s Technical Books and Cafe
“Coffee + super sleek bookstore that encourages women in tech and science. Need I say more?”
Meredith Crandell

Still hungry? Check out:

And don’t miss our posts from years past, which are full of even more recommendations: 2016, 2015, 2014, 2013, and 2012.

If you’re looking to connect with fellow attendees, please join our MozCon Facebook Group.


Win a Ticket + Lodging to MozCon 2017 – On Us!

Posted by Danielle_Launders

That’s right! We’re bringing back our MozCon contest where you, our community, can submit your creative entries for a chance to come to MozCon 2017 for free! Last year we had so many impressive and deserving entries, it was hard to choose just one… which is why this year there will be three lucky winners. Yes! You read correctly — three lucky people will be able to attend MozCon for free. Moz will not only cover your registration, but you’ll also have reserved VIP front-row seating, and we’ll pick up the bill for accommodations at the Grand Hyatt.

This is your chance to meet Roger, laugh and cry at MozCon Ignite, converse with speakers during lunch, and of course learn all the things! To enter, just send over a unique piece of content telling us why we should send you to MozCon. Make sure your entry is both original and creative!

First up: Create!

Brainstorm and create something new, unique, and compelling. Last year we saw tons of ingenious ideas, including:

  • Videos (must be one minute or less)
  • Blog posts
  • Songs
  • Books
  • Drawings
  • Slide decks
  • Anything else you can dream up!

These are just a few examples; there’s plenty of room for you to come up with your own trail-blazing ideas. Perhaps you’ll film your own Whiteboard Friday detailing why you should come, or record a custom-written, sung-from-the-heart tune about why you love MozCon? Get creative and show us what you got!

Secondly: Submit

Once your content has been perfected and is ready to share, tweet us a link @Moz and use the hashtag #mozcon by Sunday, June 18 at 5:00pm PDT. To keep things fair, there will be no exceptions — so make sure to follow instructions and don’t forget to include your contact information (name and email address) somewhere easily visible within your content. We need to be able to connect with you if you’re a winner!

All submissions will be reviewed and voted on by Moz staff. We’ll let the votes of 150 Mozzers decide the top 3 entries.

Let’s break it down:

  • Submissions close on Sunday, June 18 at 5:00pm PDT
  • Entries will be judged by Mozzers based on creativity and uniqueness of content
  • Winners will be announced and the winning entry shared from @Moz via Twitter on Monday, June 26
  • You must be able to attend MozCon, July 17–19 2017, in Seattle. Prizes are non-transferrable.
  • All submissions must adhere to the MozCon Code of Conduct
  • Contest is void where prohibited by law.
  • The value of the prize will be reported for tax purposes as required by law; the winner will receive an IRS form 1099 at the end of the calendar year and a copy of such form will be filed with the IRS. The winner is solely responsible for reporting and paying any and all applicable taxes related to the prizes and paying any expenses associated with any prize which are not specifically provided for in the official rules.

What 3 lucky people win:

  • A free ticket to MozCon 2017, including optional VIP front-row seating (valued at $ 1049)
  • Accommodations with suite upgrade at the Grand Hyatt from July 16–20, 2017 (valued at $ 1,300+)

All right, what are you waiting for? Time to start on those submissions. Best of luck to you all!


How to Delete a Google My Business Listing – A Common Question with a Complex Answer

Posted by MiriamEllis

“How do I delete a Google listing?” is an FAQ on local SEO forums — and it represents an oversimplification of a complicated and multifaceted issue. The truth is, simple deletion is seldom the answer. Rather, most events that arise in the course of doing business require knowing which steps to take to properly manage GMB listings so that they’re helping your business instead of harming it.

When it comes to managing unwanted or problematic Google My Business listings, it’s a case of horses for courses. There isn’t a single set of instructions you can reliably follow, because your particular scenario defines which steps you should take. The following table should help you identify common situations and choose the one that most closely matches yours. From there, you’ll learn which actions are available to you, and which ones, unfortunately, can’t be accomplished.

Because management of problem GMB listings usually requires either being in control of them or unverifying them, our chart begins with three verification scenarios, and then moves on to cover other typical business events.

Each scenario below is broken out into its context, the steps to take, and accompanying notes.

Unverify a Verified Listing You Control

You have a listing in your GMB dashboard that you no longer wish to control.

  • Log into your GMB dashboard
  • Click “edit”
  • Click the “info” tab
  • Click “remove listing”
  • Check all the checkboxes
  • Click “delete account”

No worries: The last step does NOT delete your Google account or the listing, itself. It simply un-verifies it so that you are no longer controlling it. The listing will still exist and someone else can take control of it.

Verify an Unverified Listing to Gain Control

You need to take control of an unwanted listing. You can tell it’s not verified, because it’s marked “claim this business” in Google Maps or “own this business?” in the knowledge panel.

Once you’ve verified the listing, you can take next steps to manage it if it’s problematic.

Take Control of a Listing Someone Else Verified

You need to take control of an unwanted listing, but someone else has verified it. You can tell it’s verified, because it lacks the attributes of “claim this business” in Google Maps or “own this business?” in the knowledge panel.

  • Contact Google via these steps
  • Google will contact the owner
  • If Google doesn’t hear back from the owner in one week, you can verify the listing

There are some anecdotal accounts of owners being able to prove to Google their rights to control a listing based on their control of an email address that matches the website domain, but no guarantees. You may need to seek legal counsel to mediate resolution with a third party who refuses to relinquish control of the listing.

Manage a Duplicate Listing for a Brick-and-Mortar Business

Your business serves customers at your location (think a retail shop, restaurant, law practice). You find more than one listing representing the business, either at its present location, at an incorrect location, or at a previous location.

  • If the address exactly matches the correct, current address of the business, contact Google to request that they merge the two listings into one.
  • If the address contains an error and the business never existed there, use the “suggest an edit” link on Google Maps, toggle the yes/no switch to “yes,” and choose the “never existed” radio button.
  • If the address is one the business previously occupied, see the section in this table on business moves.

If reviews have become associated with a business address that contains an error, you can try to request that the reviews be transferred PRIOR to designating that the business “never existed” in Google Maps.

Manage a Duplicate Listing for a Service Area Business (SAB)

Your business serves customers at their locations (think a plumber, landscaper, or cleaning service). You find more than one listing representing the business.

  • Once you’ve verified the duplicate listing, contact Google to request that they merge the two listings into one.

Remember that Google’s guidelines require that you keep addresses for SAB listings hidden.

Manage an Unwanted Listing for a Multi-Practitioner Business

The business has multiple partners (think a legal firm or medical office). You discover multiple listings for a specific partner, for partners who no longer work there, or for partners who are deceased.

  • Unfortunately, Google will not remove multi-practitioner listings for partners who are presently employed by the business.
  • If the partner no longer works there, read this article about the dangers of ignoring these listings. Then, contact Google to request that they designate the listing as “moved” (like when a business moves) to the address of the practice — not to the partner’s new address. *See notes.
  • If, regrettably, a partner has passed away, contact Google to show them an obituary.

In the second scenario, Google can only mark a past partner’s listing as moved if the listing is unverified. If the listing is verified, it would be ideal if the old partner would unverify it for you, but, if they are unwilling to do so, at least try to persuade them to update the listing with the details of their new location as a last resort. Unfortunately, this second option is far from ideal.

On a separate note, if the unwanted listing pertains to a solo-practitioner business (there’s a listing for both the company and for a single practitioner who operates the company), you can contact Google to ask that they merge the two listings in an effort to combine the ranking power of the two listings, if desired.

Manage a Listing When a Business Moves

Your company is moving to a new location. You want to avoid having the listing marked as “permanently closed,” sending a wrong signal to consumers that you’ve gone out of business.

  • Update your website with your new contact information and driving directions
  • Update your existing GMB listing in the Google My Business dashboard. Don’t create a new listing!
  • Update your other local business listings to reflect your new info. A product like Moz Local can greatly simplify this big task.

Be sure to use your social platforms to advertise your move.

Be sure to be on the lookout for any new duplicate listings that may arise as a result of a move. Again, Moz Local will be helpful for this.

Google will generally automatically move your reviews from your old location to your new one, but read this to understand exceptions.

Manage a Listing Marked “Permanently Closed”

A listing of yours has ended up marked as “permanently closed,” signaling to consumers that you may have gone out of business. Permanently closed listings are also believed to negatively impact the rankings of your open business.

  • If the “permanently closed” label exists on a verified listing for a previous location the business occupied, unverify the listing. Then contact Google to ask them to mark it as moved to the new location. This should rectify the “permanently closed” problem.
  • If the “permanently closed” label exists on a listing for your business that someone else has verified (i.e., you don’t control the listing), please see the section above labeled “Take Control of a Listing Someone Else Verified.” If you can get control of it in your dashboard and then unverify it, you’ll then be able to contact Google to ask them to mark it as moved.

The “permanently closed” label can also appear on listings for practitioners who have left the business. See the section of this chart labeled “Manage an Unwanted Listing for a Multi-Practitioner Business.”

Manage a Merger/Acquisition

Many nuances to this scenario may dictate specific steps. If the merger/acquisition includes all of the previous physical locations remaining open to the public under the new name, just edit the details of the existing GMB listings to display that new name. But if the locations that have been acquired close down, move on to the next steps.

  • Don’t edit the details of the old locations to reflect the new name
  • Unverify the listings for the old locations
  • Finally, contact Google to ask them to mark all the old locations listings as moved to the new location.

Mergers and acquisitions are complex and you may want to hire a consultant to help you manage this major business event digitally. You may also find the workload significantly lightened by using a product like Moz Local to manage the overhaul of core citations for all the businesses involved in the event.

Manage a Spam Listing

You realize a competitor or other business is violating Google’s guidelines, as in the case of creating listings at fake locations. You want to clean up the results to improve their relevance to the local community.

  • Find the listing in Google Maps
  • Click the “suggest an edit” link
  • Toggle the yes/no switch to “yes”
  • Choose the radio button for “spam”
  • Google will typically email you if/when your edit is accepted

Google doesn’t always act on spam. If you follow the outlined steps and don’t get anywhere with them, you may want to post the spam example in the GMB forum in hopes that a Top Contributor there might escalate the issue.

Unfortunately, spam is very common. Don’t be surprised if a spammer who gets caught comes right back on and continues to spam.

Manage a Listing with Bad Reviews

Your company is embarrassed by the negative reviews that are attached to its GMB listing. You wish you could just make the whole thing disappear.

  • If the reviews violate Google’s policy, consider these steps for taking action. Be advised that Google may not remove them, regardless of clear violations.
  • If the reviews are negative but genuine, Google will not remove them. Remedy the problems, in-house, that consumers are citing and master responding to reviews in a way that can save customers and your business.
  • If the business is unable to remedy structural problems being cited in reviews, the company may lack the necessary components for success.

Short of completely rebranding and moving your business to a new location, your business must be prepared to manage negative reviews. Unless consumers are citing illegal behaviors (in which case, you need legal counsel rather than marketing), negative reviews should be viewed as a FREE blueprint for fixing the issues that customers are citing.

Bear in mind that many unhappy customers won’t take the time to complain. They’ll just go away in silence and never return to your business again. When a customer takes the time to voice a complaint, seize this as a golden opportunity to win him back and to improve your business for all future customers.

Whew! Eleven common Google My Business listing management scenarios, each requiring its own set of steps. It’s my hope that this chart will not only help explain why few cases really come down to deleting GMB listings, but also serve as a handy reference for you when particular situations arise in your workday.

Helpful links

  1. If you’re not sure if you have problem listings, do a free lookup with the Moz Check Listing tool.
  2. If you’re a Moz Pro member, you have access to our Q&A forum. Please feel free to ask our community questions if you’re unsure about whether a GMB listing is problematic.
  3. The Google My Business Forum can be a good bet for getting advice from volunteer Top Contributors (and sometimes Google staffers) about problem GMB listings. Be prepared to share all of the details of your scenario if you post there.
  4. If you find yourself dealing with difficult Google My Business listing issues on a regular basis, I recommend reading the work of Joy Hawkins, who is one of the best technical local SEOs in the industry.
  5. Sometimes, the only thing you can do is to contact Google directly to try to get help with a tricky problem. Here is their main Contact page. If you’re a Google Adwords customer, you can phone 1-866-2Google and select the option for Google My Business support. Another way to seek help (and this is sometimes the fastest route) is to tweet to Google’s GMB Twitter account. Be advised that not every Google rep has had the benefits of complete training. Some interactions may be more satisfactory than others. And, if you are a digital marketer, do be prepared to set correct client expectations that not all problems can be resolved. Sometimes, even your best efforts may not yield the desired results, due to the limitations of Google’s local product.

Why it’s worth the effort to work to resolve problematic Google listings

Cumulatively speaking, inaccurate and duplicative listings can misinform and misdirect consumers while also sapping your ranking strength. Local business listings are a form of customer service, and when this element of your overall marketing plan is neglected, it can lead to significant loss of traffic and revenue. It can also negatively impact reputation in the form of negative reviews citing wrong online driving directions or scenarios in which customers end up at the old location of a business that has moved.

Taken altogether, these unwanted outcomes speak to the need for an active location data management strategy that monitors all business listings for problems and takes appropriate actions to remedy them. Verifying listings and managing duplicates isn’t glamorous work, but when you consider what’s at stake for the business, it’s not only necessary work, but even heroic. So, skill up and be prepared to tackle the thorniest situations. The successes can be truly rewarding!


Google’s Rolling Out AMP to the Main SERPs – Are You Prepared?

Posted by jenstar

Are you ready for AMP? Ready or not, it’s coming to Google search results, and it’s arriving in a big way. Google has announced that they’ll be showing Accelerated Mobile Pages in their search results for the “ten blue links.”

This means that sites that aren’t news-specific now have the opportunity to show AMP pages in Google’s search results.

AMP is a very lightweight version of a webpage that has been stripped of the many elements that cause a regular webpage to display slowly, such as tons of cookies, third-party JavaScript, and slow-loading ad networks. This results in a page that loads lightning-fast, which is great for those who are on slower connections or simply don’t want to wait for a regular, heavier page to load.

AMP has had a fairly positive reception both from site owners and from users. It’s much faster and more streamlined for searchers, especially on mobile devices that tend to be a little bit slower connection-wise.

Not a ranking boost

It’s important to note that AMP pages in the mobile search results do not receive an additional ranking boost. Google currently has the mobile-friendly ranking boost, and because AMP pages are mobile-friendly, they receive the same ranking boost.

There’s no additional ranking incentive to use AMP. Don’t switch everything to AMP simply because you think you’ll get an extra ranking boost to help you beat out competitors.

There are indirect ranking benefits, though. For example, if searchers seek out AMP results, some sites could see higher clickthrough rates on their AMP pages. And as consumer awareness grows about AMP, that will likely rise.

Replacing mobile results

Google isn’t showing additional search results based on AMP specifically. Sites with AMP won’t show two versions of the same page in the search results, one mobile and one AMP. Rather, if any of the pages in the SERPs have an AMP version, Google will show that instead of the mobile or desktop page that would normally appear.

Just as a mobile-friendly page has a tag at the front of the description snippet showing that it’s mobile-friendly, AMP has the same thing. For AMP, those results are tagged with AMP and an encircled lightning bolt before the description.

Will you be penalized for not displaying AMP?

No, Google is not planning to penalize a site simply because it isn’t AMP. Your site will still have the same positioning in mobile search results as the mobile version of the page.

Google will simply replace the mobile-friendly page — or the desktop page, if a mobile-specific page isn’t available — and show the AMP version of the page.

For sites that don’t have an AMP version of their page in the SERPs, Google will opt to show the mobile-friendly page first, or the desktop page if there’s no mobile-friendly version. But sites that are AMP-less will not be demoted in any way.

Do other ranking factors apply to AMP?

There’s no reason to believe that the regular ranking factors wouldn’t apply to AMP pages, especially for websites that are AMP-only. However, because of the nature of AMP, most of them are unlikely to be a concern.

This includes things like page speed. Because AMP pages are significantly faster than regular mobile pages, there’s no reason for a site owner to worry about being negatively impacted in rankings because of slow page speed.

Likewise, with the above-the-fold algorithm that targets sites with significant ads above the content, this again isn’t cause for concern as most AMP pages are “ad-light.”

But it is important to remember that while Google is crawling the mobile version and AMP version of pages, rankings are based on the desktop page. Thus, faults with the desktop page — such as slow page speed — could impact the overall performance of your AMP page positioning.

Should sites ditch their mobile version for AMP?

This question is going to become a bit more interesting as this rolls out to the 10 blue links. There are sites that are currently only available in AMP, such as the AMP Project website itself. But with Google now showing the AMP version in place of the mobile version, should site owners be concerned about having a mobile site?

Well, as of now, this is a Google AMP initiative. Other search engines haven’t announced the use of AMP in their own search results. First you’ll need to consider whether other search engines have issues with sites that are AMP-only — for reference, Bing has no problems indexing AMP-only sites.

Another consideration is that AMP pages are definitely more bare-bones than your typical mobile page. You need to look at it from a user-experience point of view. Are there elements on a page that will negatively impact your customer’s experience if they’re not displayed on AMP?

Also, look at it from a resource perspective. For sites that already maintain a separate m-dot site, maintaining three versions of each page could be costly in resources and work hours. This won’t be as much of a concern for those using responsive design, since changes made to desktop automatically roll out to the mobile version.

Will users gravitate to AMP results?

Just as many searchers gravitate towards search results that are tagged “mobile-friendly,” it’s very likely that some searchers, especially those on slower connections or those concerned about their data usage, will gravitate to those results that are in AMP format.

Also, because AMP pages tend to be less ad-heavy when compared to their mobile counterparts, some prefer AMP for this reason alone.

How popular is AMP?

At Google I/O, Google revealed that it has more than 150 million AMP documents indexed in their search results. And those documents are coming from 650,000 domains.

Many new sites are getting on board with AMP daily, and many large sites have added AMP pages to their entire website.

Is it country-specific?

Google is still rolling out AMP in the news carousel internationally. When AMP rolls out in the “ten blue links,” it will be an international launch. So even if your country isn’t currently showing AMP in the news section of the search results, AMP will show in the main search results when this goes live.

Is it live now?

No, this isn’t pushed live in the SERPs right now. Google has not said precisely when this will happen, other than that they’ll be making “this feature more broadly available later this year.”

Google notes that they’ve delayed this launch to allow sites time to implement Accelerated Mobile Pages before it goes live for all results. Because of AMP being so new and due to the learning curve involved, pre-announcing the change is particularly welcome in this case.

Ecommerce sites

Ecommerce sites can effectively implement AMP, and many have successfully done so in preparation for the expected launch. While there’s no specific timeline for when it’s going live, there’s a good chance it could launch in time for the upcoming holiday shopping season.

At the very least, ecommerce sites should make sure their content pages are AMP-ready. It doesn’t seem as though the various shopping carts have made their software or plug-ins AMP-friendly yet. But I expect that, on the heels of Google’s announcement, they’re scrambling to make their carts (or at the very least the product pages created through the cart software) AMP-friendly.

How to view AMP in search results

Google is only showing this to searchers who search through the Google AMP Demo URL, which can be accessed at g.co/ampdemo.

The demo shows how Google currently plans to display AMP in the search results, although Google could change the appearance before the official launch goes live in the SERPs.

Tracking AMP analytics

If you haven’t looked at it yet, Google already includes information about AMP in its search analytics. You can drill down and see the specific keywords, positioning, clicks, and more for AMP alone.

To find it, log in to Search Console, click on Search Traffic, then Search Analytics. Underneath “Search Appearance,” you can select “AMP.” Now you can drill down into AMP by pages, queries, etc. to learn more about how your AMP pages are being seen and how they’re performing in search.

Setting up AMP

For those using a popular CMS such as WordPress or Joomla, there are already plug-ins to convert pages into AMP format. This makes it very easy to AMP-enable a website for searchers looking for AMP specifically.

WordPress AMP Plugin

Baby-step your way into AMP

You don’t have to implement AMP across the entire site at once. You can test it out on a few pages first, or convert one section at a time so the errors are less daunting, cleaning them up before you roll out the next section of the site to AMP.

AMP errors

It can take up to a week or so for AMP pages to show up in Google Search Console, so it’s important to go back and check which AMP errors appear for your site. There are some common errors, usually related to markup used in themes or missing logos, but Google’s help documents are fairly intuitive.

Google shows AMP errors on a per-page basis in their Google Search Console error reporting. And until these errors are fixed, those particular pages will not show AMP in the search results. The pages that are error-free will show. Site owners can correct the most widespread or significant errors first, then tackle the individual pages with errors.

AMP errors also show up on individual pages for specific elements on the page, such as where you might have embedded video or other elements that are not AMP-friendly.

You can find AMP errors, along with the number of indexed AMP pages, in the AMP section (within the Search Appearance section) of Google Search Console.

Google also has an AMP validator available.

Advertising on AMP pages

For those site owners concerned about loss of revenue, there are ad networks that comply with the AMP standards and can be used on your AMP pages.

The AMP Project maintains an extensive list of supported ad networks, including the most popular ones (AdSense, DoubleClick, and OpenX).

Social sharing & AMP

One of the newer features that Google has added to AMP is the ability to include share buttons for various social media platforms. Some site owners were reluctant to lose the potential for shares, since many sites derive a significant portion of their traffic this way.

The <amp-social-share> tag doesn’t yet support every social media platform, but it covers the most popular ones (such as Facebook, Twitter, Pinterest, and Google+).

Checking for AMP validation

There’s a great extension for Chrome that will show you whether an AMP page validates properly, as well as flag any webpage that has an AMP version available. This is pretty handy if you’re doing competitive research and want to learn which pages, or types of pages, your competitors have enabled AMP for.

The lightning bolt appears green for a page that is valid AMP, and blue, with a link, when an AMP version of the page is available. Just click it to view the AMP version instead.

This extension also enables you to view the page from a desktop computer. Right now, the Google AMP demo requires you to use it from a mobile device, which isn’t ideal for those looking at AMP from a site-owner perspective.
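If you’d rather run this kind of check in bulk, the same signal the extension relies on, the rel="amphtml" discovery link in a page’s head, can be read with a short script. Here’s a minimal Python sketch; the URLs are placeholders for your own or your competitors’ pages:

```python
import requests
from html.parser import HTMLParser


class AmpLinkFinder(HTMLParser):
    """Collects the href of a <link rel="amphtml"> discovery tag, if present."""

    def __init__(self):
        super().__init__()
        self.amp_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and "amphtml" in (attrs.get("rel") or "").lower().split():
            self.amp_url = attrs.get("href")


def find_amp_version(url):
    """Return the advertised AMP URL for a page, or None if it has no AMP version."""
    finder = AmpLinkFinder()
    finder.feed(requests.get(url, timeout=10).text)
    return finder.amp_url


# Placeholder URLs; swap in the pages you want to audit.
for page in ["https://www.example.com/article-1", "https://www.example.com/article-2"]:
    print(page, "->", find_amp_version(page) or "no AMP version advertised")
```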

Getting help with AMP

Google also has an AMP support forum on the Google Webmaster Help forums for site owners running into any issues implementing AMP or getting it indexed properly. Multiple AMP experts regularly post in the forum answering questions and troubleshooting.

AMP resources

Suddenly find yourself having to get up to speed with AMP and don’t know where to start? Here are some useful industry resources.


Wake Up, SEOs – the NEW New Google is Here

Posted by gfiorelli1

In 2011 I wrote a post here on Moz. The title was “Wake Up SEOs, the New Google is Here.”

In that post I presented some concepts that, in my personal opinion, we SEOs needed to pay attention to in order to follow the evolution of Google.

Sure, I also presented a theory which ultimately proved incorrect; I was much too confident about things like rel=”author”, rel=”publisher”, and the potential decline of the Link Graph influence.

However, the premises of that theory were substantially correct, and they remain correct five years later:

  1. Technical SEO is foundational to the SEO practice;
  2. The user is king, which means that Google will focus more and more on delivering the best user search experience — hence, SEO must evolve from “Search Engine Optimization” into “Search Experience Optimization”;
  3. Web performance optimization (site speed), 10X content, and semantics would play a big role in SEO.

Many things have changed in our industry in the past 5 years. The time has come to pause, take a few minutes, and assess what Google is and where it’s headed.

I’ll explain how I “study” Google and what I strongly believe we, the SEOs, should pay attention to if we want not only to survive, but to anticipate Google’s end game, readying ourselves for the future.

Obviously, consider that, while I believe it’s backed up by data, facts, and proof, this is my opinion. As such, I kindly ask you not to take what I write for granted, but rather as an incentive for your own investigations and experiments.

Exploring the expanded universe of Google

Credit: Robson Ribeiro

SEO is a kingdom of uncertainty.

However, one constant never changes: almost every SEO dreams of being a Jedi at least once in her life.

I, too, fantasize about using the Force… Gianlu Ka Fiore Lli, Master Jedi.

Honestly, though, I think I’m more like Mon Mothma.

Like her, I am a strategist by nature. I love to investigate, to see connections where nobody else seems to see them, and to dig deeper into finding answers to complex questions, then design plans based on my investigations.

This way of being means that, when I look at the mysterious wormhole that is Google, I examine many sources:

  1. The official Google blogs;
  2. The “Office Hours” hangouts;
  3. The sometimes contradictory declarations Googlers make on social media (when they don’t share an infinite loop of GIFs);
  4. The Google Patents and the ones filed by people now working for Google;
  5. The news (and stories) about the companies Google acquires;
  6. The biographies of the people Google employs in key areas;
  7. The “Google Fandom” (aka what we write about it);
  8. Rumors and propaganda.

Now, when examining all these sources, it’s easy to create amazing conspiranoiac (conspiracy + paranoia) theories. And I confess: I helped create, believed, and defended some of them, such as AuthorRank.

In my opinion, though, this methodology for finding answers about Google is the best one for understanding the future of our beloved industry of search.

If we don’t dig into the “Expanded Universe of Google,” what we have is a timeline composed only of updates (Panda 1.N, Penguin 1.N, Pigeon…), which is totally useless in the long term:

Click to open a bigger version in a new tab

Instead, if we create a timeline with all the events related to Google Search (which we can discover simply by being well-informed), we begin to see where Google’s heading:

Click to open a bigger version in a new tab

The timeline above confirms what Google itself openly declared:

“Machine Learning is a core, transformative way by which we’re rethinking how we’re doing everything.”
– (Sundar Pichai)

Google is becoming a “Machine Learning-First Company,” as defined by Steven Levy in this post.

Machine learning is becoming so essential to the evolution of Google and search that perhaps we should go beyond listening only to official Google spokespeople like Gary Illyes or John Mueller (nothing personal, just to be clear… for instance, read this enlightening interview of Gary Illyes by Woj Kwasi). Maybe we should start paying more attention to what people like Christine Robson, Greg Corrado, Jeff Dean, and the staff of Google Brain write and say.

The second timeline tells us that in 2013 Google began investing money, intellectual effort, and energy on a sustained scale in:

  • Machine learning;
  • Semantics;
  • Context understanding;
  • User behavior (or “Signals/Semiotics,” as I like to call it).

2013: The year when everything changed

Google rolled out Hummingbird only three years ago, yet it already feels like decades ago, and that’s not just a saying.

Let’s quickly rehash: what’s Hummingbird?

Hummingbird is the Google algorithm as a whole. It’s composed of four phases:

  1. Crawling, which collects information on the web;
  2. Parsing, which identifies the type of information collected, sorts it, and forwards it to a suitable recipient;
  3. Indexing, which identifies and associates resources in relation to a word and/or a phrase;
  4. Search, which…
    • Understands the queries of the users;
    • Retrieves information related to the queries;
    • Filters and clusters the information retrieved;
    • Ranks the resources; and
    • Paints the search result page and so answers the queries.

This last phase, Search, is where we can find the “200+ ranking factors” (RankBrain included) and filters like Panda or anti-spam algorithms like Penguin.

Remember that there are as many Search phases as there are vertical indices (documents, images, news, video, apps, books, maps…).

We SEOs tend to fixate almost exclusively on the Search phase, forgetting that Hummingbird is more than that.

This approach to Google is myopic and does not withstand a very simple logical square exercise.

  1. If Google is able to correctly crawl a website (Crawling);
  2. to understand its meaning (Parsing and Indexing);
  3. and, finally, if the site itself responds positively to the many ranking factors (Search);
  4. then that website will be able to earn the organic visibility it aims to reach.

If even one of the three elements of the logical square is missing, organic visibility is missing; think about non-optimized AngularJS websites, and you’ll understand the logic.

The website on the left in a non-JS enabled browser. On the right, JS enabled reveals all of the content. Credit: Builtvisible.com

How can we be SEO Jedi if we only see one facet of the Force?

Parsing and indexing: often forgotten

Over the past 18 months, we’ve seen a sort of technical SEO Renaissance, as defined by Mike King in this fundamental deck, and despite attempts to classify technical SEOs as makeup artists.

On the contrary, we’re still struggling to fully understand the importance of the Parsing and Indexing phases.

Of course, we can justify that by claiming that parsing is the most complex of the four phases. Google agrees, as it openly declared when announcing SyntaxNet.

Announcing SyntaxNet

However, if we don’t optimize for parsing, then we’re not going to fully benefit from organic search, especially in the months and years to come.

How to optimize for parsing and indexing

As a premise to parsing and indexing optimization, we must remember an oft-forgotten aspect of search, which Hummingbird highlighted and enhanced: entity search.

If you remember what Amit Singhal said when he announced Hummingbird, he declared that it had “something of Knowledge Graph.”

That part was — and I’m simplifying here for clarity’s sake — entity search, which is based on two kinds of entities:

  1. Named entities are what the Knowledge Graph is about, such as persons, landmarks, brands, historic movements, and abstract concepts like “love” or “desire”;
  2. Search entities are “things” related to the act of searching. Google uses them to determine the answer for a query, especially in a personalized context. They include:
    • Query;
    • Documents and domain answering to the query;
    • Search session;
    • Anchor text of links (internal and external);
    • Time when the query is executed;
    • Advertisements responding to a query.

Why does entity search matter?

It matters because entity search is the reason Google better understands the personal and almost unique context of a query.

Moreover, thanks to entity search, Google better understands the meaning of the documents it parses. This means it’s able to index them better and, finally, to achieve its main purpose: serving the best answers to the users’ queries.

This is why semantics is important: semantic search is optimizing for meaning.

Credit: Starwars.com

It’s not a ranking factor, it’s not needed to improve crawling, but it is fundamental for Parsing and Indexing, the big forgotten-by-SEOs algorithm phases.

Semantics and SEO

First of all, we must consider that there are different kinds of semantics and that, sometimes, people tend to get them confused.

  1. Logical semantics, which is about the relations between concepts/linguistic elements (e.g.: reference, presupposition, implication, et al)
  2. Lexical semantics, which is about the meaning of words and their relation.

Logical semantics

Structured data is the big guy right now in logical semantics, and Google (both directly and indirectly) is investing a lot in it.

A couple of months ago, when the mainstream marketing gurusphere was discussing the 50 shades of the new Instagram logo or the average SEO was (justifiably) shaking his fists against the green “ads” button in the SERPs, Google released the new version of Schema.org.

This new version, as Aaron Bradley finely commented here, improves the ability to disambiguate between entities and/or better explain their meaning.

For instance, now:

At the same time, we shouldn’t forget to always use the most important property of all: “sameAs”, one of the few properties present in every Schema.org type.

Finally, as Mike Arnesen recently explained quite well here on the Moz blog, take advantage of the semantic HTML attributes itemref and itemid.

How do we implement Schema.org in 2016?

It is clear that Google is pushing JSON-LD as the preferred method for implementing Schema.org.

The best way to implement JSON-LD Schema.org is to use the Knowledge Graph Search API, which uses the standard Schema.org types and is compliant with JSON-LD specifications.
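As an illustration only (the endpoint and response fields follow the public Knowledge Graph Search API documentation, but the API key, brand, and URLs below are placeholders), here’s a minimal Python sketch that looks up an entity and emits a JSON-LD block, including sameAs, ready to drop into a script tag of type application/ld+json:

```python
import json

import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder


def kg_lookup(query, limit=1):
    """Look up an entity in the Knowledge Graph Search API (the response is JSON-LD)."""
    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": query, "key": API_KEY, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json().get("itemListElement", [])


results = kg_lookup("Moz")
entity = results[0]["result"] if results else {}

# The URLs below are illustrative; use the profiles that actually represent your brand.
json_ld = {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": entity.get("name", "Moz"),
    "description": entity.get("description", ""),
    "url": "https://moz.com/",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Moz_(marketing_software)",
        "https://twitter.com/Moz",
    ],
}

print(json.dumps(json_ld, indent=2))
```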

As an alternative, you can use the recently rolled out JSON-LD Schema Generator for SEO tool by Hall Analysis.

To solve a common complaint about JSON-LD (its volume and how it may affect the performance of a site), we can:

  1. Use Tag Manager in order to fire Schema.org when needed;
  2. Use PreRender in order to let the browser begin loading the pages your users may visit after the one they’re currently on, anticipating the load of the JSON-LD elements of those pages.

The importance Google gives to Schema.org and structured data is confirmed by the new and radically improved version of the Structured Data Testing Tool, which is now more actionable for identifying mistakes and testing solutions thanks to its JSON-LD (again!) and Schema.org contextual autocomplete suggestions.

Semantics is more than structured data #FTW!

One mistake I foresee is thinking that semantic search is only about structured data.

It’s the same kind of mistake people make in international SEO when they reduce it to hreflang alone.

The reality is that semantics is present from the very foundations of a website, found in:

  1. Its code, specifically HTML;
  2. Its architecture.

HTML

Click to open a bigger version in a new tab

Since its beginnings, HTML has included semantic markup (e.g.: title, H1, H2…).

Its latest version, HTML5, added new semantic elements, the purpose of which is to semantically organize the structure of a web document and, as W3C says, to allow “data to be shared and reused across applications, enterprises, and communities.”

A clear example of how Google uses the semantic elements of HTML is its Featured Snippets, or answer boxes.

As declared by Google itself (“We do not use structured data for creating Featured Snippets”) and explained well by Dr. Pete, Richard Baxter, and very recently Simon Penson, the documents that tend to be used for answer boxes usually display these three factors:

  1. They already rank on the first page for the query pulling out the answer box;
  2. They answer the query directly, supported by basic on-page factors;
  3. They have clean — or almost clean — HTML code.

The conclusion, then, is that semantic search starts in the code and that we should pay more attention to those “boring,” time-consuming, not-a-priority W3C error reports.

Architecture

The semiotician in me (I studied semiotics and the philosophy of language in university with the likes of Umberto Eco) cannot help but consider information architecture itself as a form of semantics.

Let me explain.

Open http://www.starwars.com/ in a tab of your browser to follow along below

Everything starts with the right ontology

Ontology is a set of concepts and categories in a subject area (or domain) that shows their properties and the relations between them.

If we take the Starwars.com site as example, we can see in the main menu the concepts in the Star Wars subject area:

  1. News/Blog;
  2. Video;
  3. Events;
  4. Films;
  5. TV Shows;
  6. Games/Apps;
  7. Community;
  8. Databank (the Star Wars Encyclopedia).

Ontology leads to taxonomy (because everything can be classified)

If we look at Starwars.com, we see how every concept included in the Star Wars domain has its own taxonomy.

For instance, the Databank presents several categories, like:

  1. Characters;
  2. Creatures;
  3. Locations;
  4. Vehicles;
  5. Et cetera, et cetera.

Ontology and taxonomy, then, lead to context

If we think of Tatooine, we tend to think about the planet where Luke Skywalker lived his youth.

However, if we visit a website about deep space exploration, Tatooine would be one of the many exoplanets that astronomers have discovered in the past few years.

As you can see, ontology (Star Wars vs celestial bodies) and taxonomies (Star Wars planets vs exoplanets) determine context and help disambiguate between similar entities.

Ontology, taxonomy, and context lead to meaning

The better we define the ontology of our website, structure its taxonomy, and offer better context to its elements, the better we explain the meaning of our website — both to our users and to Google.

Starwars.com, again, is very good at doing this.

For instance, if we examine how it structures a page like the one on TIE fighters, we see that every possible kind of content is used to help explain what a TIE fighter is:

  1. Generic description (text);
  2. Appearances of the TIE fighter in the Star Wars movies (internal links with optimized anchor text);
  3. Affiliations (internal links with optimized anchor text);
  4. Dimensions (text);
  5. Videos;
  6. Photo gallery;
  7. Soundboard (famous quotes by characters. In this case, it would be the classic “zzzzeeewww” sound many of us used as the ring tone on our old Nokias :D );
  8. Quotes (text);
  9. History (a substantial article with text, images, and links to other documents);
  10. Related topics (image plus internal links).

In the case of characters like Darth Vader, the information can be even richer.

The effectiveness of the information architecture of the Star Wars website (plus its authority) is such that its Databank is one of the very few non-Wikidata/Wikipedia sources that Google is using as a Knowledge Graph source.

Click to enlarge

What tool can we use to semantically optimize the structure of a website?

There are, in fact, several tools we can use to semantically optimize the information architecture of a website.

Knowledge Graph Search API

The first one is the Knowledge Graph Search API, because in using it we can get a ranked list of the entities that match given criteria.

This can help us better define the subjects related to a domain (ontology) and can offer ideas about how to structure a website or any kind of web document.
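To make that concrete, here’s a hedged Python sketch (the API key is a placeholder) that pulls the ranked entities the Knowledge Graph associates with a seed term, a quick way to check which concepts Google itself groups under your subject area:

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder


def related_entities(query, limit=10):
    """Return Knowledge Graph entities matching a query, ordered by resultScore."""
    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": query, "key": API_KEY, "limit": limit},
    )
    resp.raise_for_status()
    items = resp.json().get("itemListElement", [])
    return sorted(items, key=lambda item: item.get("resultScore", 0), reverse=True)


for item in related_entities("lightsaber"):
    result = item["result"]
    print(
        f'{item.get("resultScore", 0):>10.2f}  '
        f'{result.get("name", "?"):<35}  '
        f'{", ".join(result.get("@type", []))}'
    )
```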

RelFinder

A second tool we can use is RelFinder, which is one of the very few free tools for entity research.

As you can see in the screencast below, RelFinder is based on Wikipedia. Its use is quite simple:

  1. Choose your main entity (eg: Star Wars);
  2. Choose the entity you want to see connections with (eg: Star Wars Episode IV: A New Hope);
  3. Click “Find Relations.”

RelFinder will detect entities related to both (e.g.: George Lucas or Marcia Lucas), their disambiguating properties (e.g.: George Lucas as director, producer, and writer) and factual ones (e.g.: lightsabers as an entity related to Star Wars and first seen in Episode IV).

RelFinder is very useful if we must do entity research on a small scale, such as when preparing a content piece or a small website.

However, if we need to do entity research on a bigger scale, it’s much better to rely on the following tools:

AlchemyAPI and other tools

AlchemyAPI, which was acquired by IBM last year, uses machine and deep learning in order to do natural language processing, semantic text analysis, and computer vision.

AlchemyAPI, which offers a 30-day trial API Key, is based on the Watson technology; it allows us to extract a huge amount of information from text, with concepts, entities, keywords, and taxonomy offered by default.
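For reference, here is a minimal sketch of a ranked-entity call. It follows the legacy AlchemyAPI REST interface as it was documented around this time (the host and exact parameters varied after the IBM move, and the service has since been folded into Watson Natural Language Understanding), so treat the endpoint, key, and sample text as placeholders and check IBM’s current documentation before relying on it:

```python
import requests

ALCHEMY_API_KEY = "YOUR_ALCHEMYAPI_KEY"  # placeholder


def extract_entities(text):
    """Extract ranked named entities from a block of text via the legacy AlchemyAPI call."""
    resp = requests.post(
        "https://gateway-a.watsonplatform.net/calls/text/TextGetRankedNamedEntities",
        data={"apikey": ALCHEMY_API_KEY, "text": text, "outputMode": "json"},
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])


sample = (
    "The TIE fighter was the unforgettable symbol of the Imperial fleet. "
    "Darth Vader flew a prototype TIE Advanced at the Battle of Yavin."
)

for entity in extract_entities(sample):
    print(entity.get("relevance"), entity.get("type"), entity.get("text"))
```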

Resources about AlchemyAPI

Other tools that allow us to do entity extraction and semantic analysis at scale include:

Lexical semantics

As said before, lexical semantics is that branch of semantics that studies the meaning of words and their relations.

In the context of semantic search, this area is usually defined as keyword and topical research.

Here on Moz you can find several Whiteboard Friday videos on this topic:

How do we conduct semantically focused keyword and topical research?

Despite its recent update, Keyword Planner can still be useful for performing semantically focused keyword and topical research.

In fact, that update could even be deemed as a logical choice, from a semantic search point of view.

Terms like “PPC” and “pay-per-click” are synonyms, and even though each one surely has a different search volume, it’s evident how Google presents two very similar SERPs if we search for one or the other, especially if our search history already exhibits a pattern of searches related to SEM.

Yet this dimming of keyword data is less helpful for SEOs in that it makes for harder forecasting and prioritization of which keywords to target. This is especially true when we search for head terms, because it exacerbates a problem that Keyword Planner had: combining stemmed keywords that — albeit having “our keyword” as a base — have nothing in common because they mean completely different things and target very different topics.

However (and this is a pro tip), there is a way to discover the most useful keyword even when several share the same search volume: look at how much advertisers bid for it. Trust the market ;-) .

(If you want to learn more about the recent changes to Keyword Planner, go read this post by Bill Slawski.)

Keyword Planner for semantic search

Let’s say we want to create a site about Star Wars lightsabers (yes, I am a Star Wars geek).

What we could do is this:

  1. Open Keyword Planner / Find new Keywords and get (AH!) search volume data;
  2. Describe our product or service (“News” in the snapshot above);
  3. Use the Wikipedia page about lightsabers as the landing page (if your site were in Spanish, use the Spanish Wikipedia page);
  4. Indicate our product category (Movies & Films above);
  5. Define the target and eventually indicate negative keywords;
  6. Click on “Get Ideas.”

Google will offer us these Ad Groups as results:

Click to open a bigger version in a new tab

The Ad Groups are a collection of semantically related keywords. They’re very useful for:

  1. Identifying topics;
  2. Creating a dictionary of keywords that can be given to writers, so the resulting copy is both natural and semantically consistent.

Remember, then, that Keyword Planner allows us to do other kinds of analysis too, such as breaking down how the discovered keywords/Ad Groups are used by device or by location. This information is useful for understanding the context of our audience.

If you have one or a few entities for which you want to discover topics and grouped keywords, working directly in Keyword Planner and exporting everything to Google Sheets or an Excel file can be enough.

However, when you have tens or hundreds of entities to analyze, it’s much better to use the Adwords API or a tool like SEO Powersuite, which allows you to do keyword research following the method I described above.

Google Suggest, Related Searches, and Moz Keyword Explorer

Alongside Keyword Planner, we can use Google Suggest and Related Searches: not simply to identify topics that people search for and then write an instant blog post or landing page about them, but to reaffirm and perfect our site’s architecture.

Continuing with the example of a site or section specializing in lightsabers, if we look at Google Suggest we can see how “lightsaber replica” is one of the suggestions.

Moreover, amongst the Related Searches for “lightsaber,” we see “lightsaber replica” again, which is a clear signal of its relevance to “lightsaber.”

Finally, we can click on and discover “lightsaber replica”-related searches, thus creating what I define as the “search landscape” about a topic.
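If you want to sketch that search landscape programmatically rather than by clicking through SERPs, Google exposes an unofficial autocomplete endpoint that many keyword tools quietly use. It’s undocumented and can change or be rate-limited at any time, so treat the snippet below purely as an exploratory sketch:

```python
import time

import requests


def google_suggest(term, lang="en"):
    """Fetch autocomplete suggestions from Google's unofficial suggest endpoint."""
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "hl": lang, "q": term},
    )
    resp.raise_for_status()
    return resp.json()[1]  # response is [original term, [suggestions...]]


# Expand one level: suggestions for the seed term, then for each of those suggestions.
seed = "lightsaber"
landscape = {seed: google_suggest(seed)}
for suggestion in landscape[seed]:
    time.sleep(1)  # be polite; this endpoint is not an official API
    landscape[suggestion] = google_suggest(suggestion)

for term, suggestions in landscape.items():
    print(term, "->", ", ".join(suggestions))
```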

The model above is not scalable if we have many entities to analyze. In that case, a tool like Moz Keyword Explorer can be helpful thanks to the options it offers, as you can see in the snapshot below:

Click to open a bigger version in a new tab

Other keywords and topical research sources

Recently, Powerreviews.com presented survey results showing that Internet users tend to prefer Amazon over Google when searching for information about a product (38% vs. 35%).

So, why not use Amazon for keyword and topical research, especially for ecommerce websites or for the MOFU and BOFU (middle- and bottom-of-funnel) phases of our customers’ journey?

We can use Amazon Suggest:

Or we can use a free tool like the Amazon Keyword Tool by SISTRIX.

The Suggest function, though, is present in (almost) every website that has a search box (your own site, even, if you have it well-implemented!).

This means that if we’re researching more mainstream and top-of-the-funnel topics, we can use the suggestions of social networks like Pinterest (e.g.: explore the voluptuous universe of “lightsaber cakes” and related topics):

Pinterest, then, is a real topical research goldmine thanks to its tagging system:

Pinterest Lightsaber Tags

On-page

Once we’ve defined the architecture, the topics, and prepared our keyword dictionaries, we can finally work on the on-page facet of our work.

The details of on-page SEO are another post for another time, so I’ll simply recommend you read this evergreen post by Cyrus Shepard.

The best way to grade the semantic search optimization of a written text is to use TF-IDF analysis, offered by tools like OnPage.org (which also offers a clear guide to the advantages and disadvantages of TF-IDF analysis).

Remember that TF-IDF can also be used for doing competitive semantic search analysis and to discover the keyword dictionaries used by our competitors.
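If you’d rather run a quick TF-IDF comparison yourself, the scikit-learn library makes it a few lines of Python. The snippet below is a toy sketch (the document texts are stand-ins for full pages you’d fetch and clean first), but it shows the idea of comparing the highest-weighted terms across your page and your competitors’:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in texts; in practice, use the full, boilerplate-stripped body copy of each page.
documents = {
    "our-page": "A lightsaber is the energy sword wielded by the Jedi and the Sith ...",
    "competitor-a": "Build your own lightsaber replica with custom hilts and blades ...",
    "competitor-b": "The history of the lightsaber across the Star Wars films and series ...",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents.values())
terms = vectorizer.get_feature_names_out()

# Print the highest-weighted terms per document to compare keyword dictionaries.
for name, row in zip(documents.keys(), matrix.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:5]
    print(name, [term for term, score in top if score > 0])
```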

User behavior / Semiotics and context

At the beginning of this post, we saw how Google is heavily investing in better understanding the meaning of the documents it crawls, so as to better answer the queries users perform.

Semantics (and semantic search) is only one of the pillars on which Google is basing this tremendous effort.

The other pillar consists of understanding user search behaviors and the context of the users performing a search.

User search behavior

Recently, Larry Kim shared two posts based on experiments he ran, demonstrating his theory that RankBrain is related to factors like CTR and dwell time.

While these posts are super actionable, present interesting information with original data, and confirm other tests conducted in the past, these so-called user signals (CTR and dwell time) may not be directly related to RankBrain but, instead, to user search behaviors and personalized search.

Be aware, however, that my statement here above should be taken as a personal theory, because Google itself doesn’t really know how RankBrain works.

AJ Kohn, Danny Sullivan, and David Harry wrote additional interesting posts about RankBrain, if you want to dig into it (for the record, I wrote about it too here on Moz).

Even if RankBrain may be included in the semantic search landscape due to its use of Word2Vec technology, I find it better to concentrate on how Google may use user search behaviors to better understand the relevance of the parsed and indexed documents.

Click-through rate

Since Rand Fishkin presented his theory — backed up with tests — that Google may use CTR as a ranking factor more than two years ago, a lot has been written about the importance of click-through rate.

Common sense suggests that if people click more often on one search snippet than on another that ranks in a higher position, then Google should take that user signal into consideration and eventually lift the ranking of the page that consistently receives the higher CTR.

Common sense, though, is not so easy to apply when it comes to search engines, and repeatedly Googlers have declared that they do not use CTR as a ranking factor (see here and here).

And although Google has long since developed a click fraud detection system for Adwords, it’s still not clear if it would be able to scale it for organic search.

On the other hand — let me be a little bit conspiranoiac — if CTR is not important at all, then why has Google changed the pixel widths of the title tag and meta description? Just for “better design”?

But as Eric Enge wrote in this post, one of the few things we know is that Google filed a patent (Modifying search result ranking based on a temporal element of user feedback, May 2015) about CTR. It’s surely using CTR in testing environments to better calculate the value and grade of other rankings factors and — this is more speculative — it may give a stronger importance to click-through rate in those subsets of keywords that clearly express a QDF (Query Deserves Freshness) need.

What’s less discussed is the importance CTR has in personalized search, as we know that Google tends to paint a custom SERP for each of us depending on both our search history and our personal click-through rate history. They’re key in helping Google determine which SERPs will be the most useful for us.

For instance:

  1. If we search something for the first time, and
  2. for that search we have no search history (or not enough to trigger personalized results), and
  3. the search presents ambiguous entities (i.e.: “Amber“),
  4. then it’s only thanks to our personal CTR/search history that Google will determine which search results related to a given entity to show or not (amber the stone or Amber Rose or Amber Alerts…).

Finally, even if Google does not use CTR as a ranking factor, this doesn’t mean it’s not an important metric and signal for SEOs. We have years of experience and hundreds of tests proving how important it is to optimize our search snippets (and now Rich Cards) with the appropriate use of structured data in order to earn more organic traffic, even if we rank worse than our competitors.

Watch time

Having good CTR metrics is totally useless if the pages our visitors land on don’t fulfill the expectation the search snippet created.

This is similar to the difference between a clickbait headline and a persuasive one. The first will probably cause a click back to the search results page; the second, instead, will hold and engage visitors.

The ability of a site to retain its users is what we usually call dwell time, but which Google defines as watch time in this patent: Watch Time-Based Ranking (March 2013).

This patent is usually cited in relation to video because the patent itself uses video as content example, but Google doesn’t restrict its definition to videos alone:

In general, “watch time” refers to the total time that a user spends watching a video. However, watch times can also be calculated for and used to rank other types of content based on an amount of time a user spends watching the content.

Watch time is indeed a more useful user signal than CTR for understanding the quality of a web document and its content.

Are you skeptical and don’t trust me? Trust Facebook, then, because it also uses watch time in its news feed algorithm:

We’re learning that the time people choose to spend reading or watching content they clicked on from News Feed is an important signal that the story was interesting to them.


We are adding another factor to News Feed ranking so that we will now predict how long you spend looking at an article in the Facebook mobile browser or an Instant Article after you have clicked through from News Feed. This update to ranking will take into account how likely you are to click on an article and then spend time reading it. We will not be counting loading time towards this — we will be taking into account time spent reading and watching once the content has fully loaded. We will also be looking at the time spent within a threshold so as not to accidentally treat longer articles preferentially.

With this change, we can better understand which articles might be interesting to you based on how long you and others read them, so you’ll be more likely to see stories you’re interested in reading.

Context and the importance of personalized search

I usually joke and say that the biggest mistake a gang of bank robbers could do is bring along their smartphones. It’d be quite easy to do PreCrime investigations simply by checking their activity board, which includes their location history on Google Maps.

A conference day in Adelaide.

In order to fulfill its mission of offering the best answers to its users, Google must not only understand the web documents it crawls so as to index them properly, and not only improve its own ranking factors (taking into consideration the signals users give during their search sessions), but also understand the context in which users perform a search.

Here’s what Google knows about us:

It’s because of this compelling need to understand our context that Google hired the entire Behav.io team back in 2013.

Behav.io, if you don’t know it already, was a company that developed alpha-test software based on its open source framework Funf (still alive), the purpose of which was to record and analyze the data that smartphones keep track of: location, speed, nearby devices and networks, phone activity, noise levels, et al.

All this information is required in order to better understand the implicit aspects of a query, especially if done from a smartphone and/or via voice search, and to better process what Tom Anthony and Will Critchlow define as compound queries.

However, personalized search is also determined by (again) entity search, specifically by search entities.

The relation between search entities creates a “probability score,” which may determine if a web document is shown in a determined SERP or not.

For instance, let’s say that someone performs a search about a topic (e.g.: Wookiees) for which she never clicked on a search snippet of our site, but did click on another that had content about that same topic (e.g.: Wookieepedia) and which linked to the page about it on our site (e.g.: “How to distinguish one Wookiee from another?”).

Those links — specifically their anchor texts — would help our site and page to earn a higher probability score than a competitor site that isn’t linked to by those sites present in the user’s search history.

This means that our page will have a better probability of appearing in that user’s personalized SERP than our competitors’.

You’re probably asking: what’s the actionable point of this patent?

Link building/earning is not dead at all, because it’s relevant not only to the Link Graph, but also to entity search. In other words, link building is semantic search, too.

The importance of branding and offline marketing for SEO

One of the classic complaints SEOs have about Google is how it favors brands.

The real question, though, should be this: “Why aren’t you working to become a brand?”

Be aware! I am not talking about “vision,” “mission,” and “values” here — I’m talking about plain and simple semantics.

All throughout this post I spoke of entities (named and search ones), cited Word2Vec (vectors are “vast amounts of written language embedded into mathematical entities”), talked about lexical semantics, meaning, ontology, personalized search, and implied topics like co-occurrences and knowledge base.

Branding has a lot to do with all of these things.

I’ll try to explain it with a very personal example.

Last May in Valencia I debuted as conference organizer with The Inbounder.

One of the problems I faced when promoting the event was that “inbounder,” which I thought was a cool name for an event targeting inbound marketers, is also a basketball term.

The problem was obvious: how do I make Google understand that The Inbounder was not about basketball, but digital marketing?

The strategy we followed from the very beginning was to work on the branding of the event (I explain more about The Inbounder story here on Inbound.org).

We did this:

  • We created small local events, so as to
    • develop a presence in local newspapers online and offline, a tactic that also prompted marketers to search Google for the event using branded keywords (e.g.: “The Inbounder conference,” “The Inbounder Inbound Marketing Conference,” etc…), and
    • click on our search results snippets, hence activating personalized search
  • We worked with influencers (the speakers themselves) to trigger branded searches and direct traffic (remember: Chrome stores every URL we visit);
  • We did outreach and published guest posts about the event on sites visited by our audience (and recorded in its search history).

As a result, The Inbounder now occupies the entire first page of Google for its brand name and, more importantly in semantic terms, Google presents The Inbounder events as suggested and related searches. It associates the event with all the searches I could ever want:

Another example is Trivago and its global TV advertising campaigns:

Trivago was very smart in constantly showing “Trivago” and “hotel” in the same phrase, even making their motto “Hotel? Trivago.”

This is a simple psychological trick for creating word associations.

As a result, people searched on Google for “hotel Trivago” (or “Trivago hotel”), especially just after the ads were broadcast:

One result is that Google now suggests “hotel Trivago” when we start typing “hotel” and, as in the case of The Inbounder, presents “hotel Trivago” as a related search:

Wake up SEOs, the new new Google is here

Yes, it is. And it’s all about better understanding web documents and queries in order to provide the best answers to its users (and make money in the meantime).

To achieve this objective, ideally becoming the long-desired “Star Trek computer,” Google is investing money, people, and efforts into machine/deep learning, neural networks, semantics, search behavior, context analysis, and personalized search.

Remember, SEO is no longer just about “200 ranking factors.” SEO is about making our websites become the sources Google cannot help but use for answering queries.

This is exactly why semantic search is of utmost importance and not just something worth the attention of a few geeks passionate about linguistics, computer science, and patents.

Work on parsing and indexing optimization now, seriously implement semantic search in your SEO strategy, take advantage of the opportunities personalized search offers you, and always put users at the center of everything you do.

In doing so you’ll build a solid foundation for your success in the years to come, both via classic search and with Google Assistant/Now.


Let Data Take the Wheel – Using API-Integrated Reporting Dashboards

Posted by IanWatson

Some say the only constant thing in this world is change — and that seems to go double for the online marketing and SEO industry. At times this can seem daunting and sometimes insurmountable, but some have found ways to embrace the ambiguity and even thrive on it. Their paths and techniques may all differ slightly, but a commonality exists among them.

That commonality is the utilization of data, mainly via API-driven custom tools and dashboards. APIs like Salesforce’s Chatter, Facebook’s Graph, and our very own Mozscape all allow for massive amounts of useful data to be integrated into your systems.

So, what do you do with all that data?

The use cases are limitless and really depend on your goals, business model, and available resources. Many in our industry, including myself, still rely heavily upon spreadsheets to manage large data sets.

However, the amount of native data and data within reach has grown drastically, and can quickly become unwieldy.

An example of a live reporting dashboard from Klipfolio.

Technology to the rescue!

Business intelligence (BI) is a necessary cog in the machine when it comes to running a successful business. The first step to incorporating BI into your business strategy is to adopt real-time reporting. Much like using Google Maps (yet another API!) on your phone to find your way to a new destination, data visualization companies like Klipfolio, Domo, and Tableau have built live reporting dashboards to help you navigate the wild world of online marketing. These interactive dashboards allow you to integrate data from several sources to better assist you in making real-time decisions.

A basic advertising dashboard.

For example, you could bring your ad campaign, social, and web analytics data into one place and track key metrics and overall performance in real-time. This would allow you to delegate extra resources towards what’s performing best, pulling resources from lagging activities in the funnel as they are occurring. Or perhaps you want to be ahead of the curve and integrate some deep learning into your analysis? Bringing in an API like Alchemy or a custom set-up from Algorithmia could help determine what the next trends are before they even happen. This is where the business world is heading; you don’t want to fall behind.

Resistance is futile.

The possibilities of real-time data analysis are numerous, and the first step towards embracing this new-age necessity is to get your first, simple dashboard set up. We’re here to help. In fact, our friends at Klipfolio were nice enough to give us step-by-step instructions on integrating our Mozscape data, Hubspot data, and social media metrics into their live reporting dashboard — even providing a live demo reporting dashboard. This type of dash allows you to easily create reports, visualize changes in your metrics, and make educated decisions based on hard data.

Create a live reporting dashboard featuring Moz, Hubspot and social data

1. First, you’ll need to create your Mozscape API key. You’ll need to be logged into your existing Moz account, or create a free community or pro Moz account. Once you’re logged in and on the API key page, press “Generate Key.”

2. This is the key you’ll use to access the API and is essentially your password. This is also the key you’ll use for step 6, when you’re integrating this data into Klipfolio.

3. Create a free 14-day Klipfolio trial. Then select “Add a Klip.”

4. The Klip Gallery contains pre-built widgets for whatever your favorite services might be. You can find Klips for Facebook, Instagram, Alexa, Adobe, Google AdWords and Analytics, and a bunch of other useful integrations. They’re constantly adding more. Plus, in Klipfolio, you can build your own widgets from scratch.

For now, let’s keep it simple. Select “Moz” in the Klip Gallery.

5. Pick the Klip you’d like to add first, then click “Add to Dashboard.”

6. Enter your API key and secret key. If you don’t have one already, you can get your API key and secret ID here.

7. Enter your company URL, followed by your competitors’ URLs.

8. Voilà — it’s that easy! Just like that, you have a live look at backlinks on your own dash.

9. From here, you can add any other Moz widgets you want by repeating steps 5–8. I chose to add in MozRank and Domain Authority Klips.

10. Now let’s add some social data streams onto our dash. I’m going to use Facebook and Twitter, but each of the main social media sites has a similar setup process.

11. Adding in other data sources like Hubspot, Searchmetrics, or Google Analytics simply requires you to be set up with those parties and to grant Klipfolio access.

12. Now that we have our Klips set up, the only thing left to do is arrange the layout to your liking.

After you have your preferred layout, you’re all set! You’ve now entered the world of business intelligence with your first real-time reporting dashboard. After the free Klipfolio trial is complete, it’s only $ 20/month to continue reporting like the pros. I haven’t found many free tools in this arena, but this plan is about as close as you’ll come.
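And if you’d rather pull the same Mozscape metrics straight into your own scripts or spreadsheets, the URL Metrics endpoint can be called directly with a signed request. The sketch below follows the legacy Mozscape authentication scheme; the access ID, secret key, and Cols bit flags are placeholders you’d replace from your Moz account and the Mozscape documentation:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote_plus

import requests

ACCESS_ID = "member-xxxxxxxxxx"   # placeholder from your Moz API credentials
SECRET_KEY = "your-secret-key"    # placeholder

# Cols bit flags for Page Authority and Domain Authority; confirm against the Mozscape docs.
PAGE_AUTHORITY = 34359738368
DOMAIN_AUTHORITY = 68719476736


def url_metrics(target_url):
    """Fetch URL metrics from the legacy Mozscape endpoint using an expiring HMAC signature."""
    expires = int(time.time()) + 300
    signature = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), f"{ACCESS_ID}\n{expires}".encode(), hashlib.sha1).digest()
    ).decode()

    resp = requests.get(
        "https://lsapi.seomoz.com/linkscape/url-metrics/" + quote_plus(target_url),
        params={
            "Cols": PAGE_AUTHORITY + DOMAIN_AUTHORITY,
            "AccessID": ACCESS_ID,
            "Expires": expires,
            "Signature": signature,
        },
    )
    resp.raise_for_status()
    return resp.json()


print(url_metrics("https://moz.com/blog"))
```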

Take a look at a live demo reporting dash, featuring all of the sources we just went over:

Click to see a larger version.

Conclusion

Just like that, you’ve joined the ranks of Big SEO, reporting like the big industry players. In future posts we’ll bring you more tutorials on building simple tools, utilizing data, and mashing it up with outside sources to better help you navigate the ever-changing world of online business. There’s no denying that, as SEO and marketing professionals, you’re always looking for that next great innovation to give you and your customers a competitive advantage.

From Netflix transitioning into an API-centric business to Amazon diving into the API management industry, the largest and most influential companies out there realize that utilizing large data sets via APIs is the future. Follow suit: Let big data and business intelligence be your guiding light!
