Tag Archive | "Checklist"

Enterprise Local SEO is Different: A Checklist, a Mindset

Posted by MiriamEllis

Image credit: Abraham Williams

If you’re marketing big brands with hundreds or thousands of locations, are you certain you’re getting model-appropriate local SEO information from your favorite industry sources?

Is your enterprise checking off not just technical basics, but hyperlocalized research to strengthen its entrance into new markets?

Before I started working for Moz in 2010, the bulk of my local SEO experience had been with small-to-medium business models. Naturally, the advice I was able to offer back then was limited by the scope of my work. But then came Moz Local, and the opportunity to learn about the more complex needs of valued enterprise customers like Crate & Barrel with more than 170 locations, PAPYRUS with 400, or Bridgestone Corporation with 2000+.

Now, when I’m thumbing through industry tips and tactics, I’m better able to identify when a recommended practice is stemming from an SMB mindset and falling short of enterprise realities, or is truly applicable to all business models. My goal for this post is to offer:

  • Examples of commonly encountered advice that isn’t really best for big brands
  • An Enterprise Local SEO Checklist to help you shape strategy for present campaigns, or ready your agency to pursue relationships with bigger dream clients
  • A state-to-enterprise wireframe for initial hyperlocal marketing research

Not everything you read is for enterprises

When a brand is small, like a single-location, family-owned retail shop, it’s likely that a single person at the company can manage the business’s local SEO with some free education and a few helpful tools. Large, multi-location brands, just by dint of organizational complexities, are different. Before they even get down to the nitty-gritty of building citations, enterprises have to solve for:

  • Standardizing data across hundreds or thousands of locations
  • Franchise relationships that can muddy who controls which data and assets
  • Designating staff to actually manage data and execute initiatives, and building bridges between teams that must work in concert to meet goals
  • Scaling everything from listings management, to site architecture, to content dev
  • Dealing with a hierarchy of reports of bad data from the retail location level up to corporate

I am barely scratching the surface here. In a nutshell, the scale of the organization and the scope of the multi-location brand can turn a task that would be simple for Mom-and-Pop into a major, company-wide challenge. And I think it adds to the challenge when published advice for SMBs isn’t labeled as such. Over the years, three common tips I’ve encountered with questionable or no applicability to enterprises include:

Not-for-enterprises #1: Link all your local business listings to your homepage

This is sometimes offered as a suggestion to boost local rankings, because website home pages typically have more authority than location landing pages do. But in the enterprise scenario, sending a consumer from a listing for his chosen location to a homepage, then expecting him to fool around with a menu or a store locator widget to finally reach a landing page for the location he’s already indicated he wants, doesn’t respect his user experience. It wastes his time. I consider this an unnecessary risk to conversions.

Simultaneously, failure to fully utilize location landing pages means that very little can be done to customize the website experience for each community and customer. Directly-linked-to landing pages can provide instant, persuasive proofs of local-ness, in the form of real local reviews, news about local sponsorships and events, special offers, regional product highlights, imagery and so much more that no corporate homepage can ever provide. Consider these statistics:

“According to a new study, when both brand and location-specific pages exist, 85% of all consumer engagement takes place on the local pages (e.g., Facebook Local Pages, local landing pages). A minority of impressions and engagement (15%) happen on national or brand pages.” - Local Search Association

In the large, multi-location scenario, it just isn’t putting the customer first to swap out a hoped-for ranking increase for a considerate, well-planned user experience.

Not-for-enterprises #2: Local business listings are a one-and-done deal

I find this advice particularly concerning. I don’t consider it true even for SMBs, and at the enterprise level, it’s simply false. It’s my guess that this suggestion stems from imagining a single local business that creates its Google My Business listing and builds out perhaps 20–50 structured citations with good data. What could go wrong?

For starters, they may have forgotten that their business name was different 10 years ago. Oh, and they did move across town 5 years ago. And this old data is sitting somewhere in a major aggregator like Acxiom, and somehow due to the infamous vagaries of data flow, it ends up on Bing, and a Bing user gets confused and reports to Google that the new address is wrong on the GMB listing … and so on and so on. Between data flow and crowdsourced editing, a set-and-forget approach to local business listings is trouble waiting to happen.

Now multiply this by 1,000 business locations. And throw in that the enterprise opened two new stores yesterday and closed one. And that they just acquired a new chain and have to rebrand all its assets. And there seems to be something the matter with the phone number on 25 listings, because they’re getting agitated complaints at corporate. And they received 500 reviews last week on Google alone that have to be managed, and it seems one of their competitors is leaving them negative reviews. Whoa – there are 700 duplicate listings being reported by Moz Local! And the brand has 250 Google Questions & Answers queries to respond to this week. And someone just uploaded an image of a dumpster to their GMB listing in Santa Fe…

Not only do listings have to be built, they have to be monitored for data degradation, and managed for inevitable business events, responsiveness to consumers, and spam. It’s hard enough for SMBs to pull all of this off, but enterprises ignore this at their peril!

Not-for-enterprises #3: Just do X

Every time a new local search feature or best practice emerges, you’ll find publications saying “just do X” to implement. What I’ve learned from enterprises is that there is no “just” about it.

Case in point: in 2017, Google rolled out Google Posts, and as Joel Headley of healthcare practice growth platform PatientPop explained to me in a recent interview, his company had to quickly develop a solution that would enable thousands of customers to utilize this influential feature across hundreds of thousands of listings. PatientPop managed implementation in an astonishingly short time, but typically, at the enterprise level, each new rollout requires countless steps up and down the ladder. These could include achieving recognition of the new opportunity, approval to pursue it, designation of teams to work on it, possible acquisition of new assets to accomplish goals, implementation at scale, and the groundwork of tracking outcomes so that they can be reported to prove/disprove ROI from the effort.

Where small businesses can be relatively agile if they can find the time to take on new features and strategies, enterprises can become dangerously bogged down by infrastructure and communications gaps. Even something as simple as hyperlocalizing content to the needs of a given community represents a significant undertaking.

The family-owned local hardware store already knows that the county fair is the biggest annual event in their area, and they’ve already got everything necessary to participate with a booth, run a contest, take photos, sponsor the tractor pull, earn links, and blog about it. For the hardware franchise with 3,000 stores, branch-to-corporate communication of the mere existence of the county fair, let alone gaining permission to market around it, will require multiple touches from the location to C-suites, and back again.

Checklist for enterprise local SEO preparedness

If you’re on the marketing team for an enterprise, or you run an agency and want to begin working with these larger, rewarding clients, you’ll be striving to put a checkmark in every box on the following checklist:

☑ Definition of success

We’ve determined which actions = success for our brand, whether this is increases for in-store traffic, sales, phone calls, bookings, or some other metric. When we see growth in these KPIs, it will affirm for us that our efforts are creating real success.

☑ Designation of roles

We’ve defined who will be responsible for all tasks relating to the local search marketing of our business. We’ve equipped these team members with all necessary permissions, granted them access to key documentation, organized workflows, and created an environment for documenting work.

☑ Canonical data

We’ve created a spreadsheet, approved and agreed upon by all major departments, that lists the standardized name, address, phone number, website URL, and hours of operation for each location of the company. Any variant information has been resolved into a single, agreed-upon data set for each location. This sheet has been shared with all stakeholders managing our local business listings, marketing, website and social outreach.
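
Because this sheet becomes the single source of truth for listings, marketing, and the website, it’s worth sanity-checking it programmatically every time it changes. Below is a minimal sketch of that idea in Python; the locations.csv filename, its column names, and the duplicate-phone rule are all assumptions, and real validation (phone formats, hours syntax) would go further.

```python
import csv

REQUIRED = ["name", "address", "phone", "website", "hours"]

def validate_locations(path):
    """Flag rows in the canonical location sheet with missing or duplicated data."""
    problems = []
    seen_phones = {}
    with open(path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
            missing = [field for field in REQUIRED if not (row.get(field) or "").strip()]
            if missing:
                problems.append(f"Row {i}: missing {', '.join(missing)}")
            phone = (row.get("phone") or "").strip()
            if phone in seen_phones:
                problems.append(f"Row {i}: phone duplicates row {seen_phones[phone]}")
            elif phone:
                seen_phones[phone] = i
    return problems

for issue in validate_locations("locations.csv"):
    print(issue)
```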

☑ Website optimization

Our keyword research findings are reflected in the tags and text of our website, including image optimization. Complete contact information for each of our locations is easily accessible on the site and is accurate. We’ve implemented proper structured data markup, such as Schema.org vocabulary delivered via JSON-LD, to ensure that our data is as clear as possible to search engines.
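
At enterprise scale, that markup is usually generated from the canonical data sheet rather than hand-coded on each landing page. Here’s a minimal sketch using schema.org’s LocalBusiness type; the input field names are assumptions, and many brands would use a more specific type (e.g., HardwareStore) plus per-location geo and review properties.

```python
import json

def local_business_jsonld(location):
    """Build the LocalBusiness JSON-LD block for one location landing page."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": location["name"],
        "telephone": location["phone"],
        "url": location["website"],
        "address": {
            "@type": "PostalAddress",
            "streetAddress": location["street"],
            "addressLocality": location["city"],
            "addressRegion": location["region"],
            "postalCode": location["postal_code"],
        },
        "openingHours": location["hours"],
    }, indent=2)

print(local_business_jsonld({
    "name": "Example Hardware - Santa Rosa",
    "phone": "+1-707-555-0100",
    "website": "https://example.com/locations/santa-rosa",
    "street": "123 Main St",
    "city": "Santa Rosa",
    "region": "CA",
    "postal_code": "95401",
    "hours": "Mo-Sa 08:00-18:00",
}))
```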

☑ Website quality

Our website is easy to navigate and provides a good, usable experience for desktop, mobile and tablet users. We understand that the omni-channel search environment includes ambient search in cars, in homes, via voice. Our website doesn’t rely on technologies that exclude search engines or consumers. We’re putting our customer first.

☑ Tracking and analysis

We’ve implemented maximum controls for tracking and analyzing traffic to our website. We’re also ready to track and analyze other forms of marketing, such as clicks stemming from our Google My Business listings, traffic driven to our website by articles on third-party sources, and content we’re sharing via social media.
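
One common convention for making those listing-driven clicks attributable in analytics (rather than having them lumped into generic organic traffic) is to tag each location’s listing URL with UTM parameters. A rough sketch; the parameter values are illustrative, and whatever scheme you pick should be applied consistently across every location.

```python
from urllib.parse import urlencode

def tagged_listing_url(landing_page, location_id):
    """Append UTM parameters so listing-driven visits are attributable."""
    params = {
        "utm_source": "google",
        "utm_medium": "organic",
        "utm_campaign": "gmb-listing",
        "utm_content": location_id,  # lets you segment traffic per location
    }
    return f"{landing_page}?{urlencode(params)}"

print(tagged_listing_url("https://example.com/locations/santa-rosa", "store-042"))
```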

☑ Publishing strategy

Our website features strong basic pages (Home, Contact, About, Testimonials/Reviews, Policy), we’ve built an excellent, optimized page for each of our core products/services and a quality, unique page for each of our locations. We have a clear strategy as to ongoing content publication, in the form of blog posts, white papers, case studies, social outreach, and other forms of content. We have plans for hyperlocalizing content to match regional culture and needs.

☑ Store locator

We’ve implemented a store locator widget to connect our website’s users to the set of location landing pages we’ve built to thoughtfully meet the needs of specific communities. We’ve also created an HTML version of a menu linking to all of these landing pages to ensure search engines can discover and index them.
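
That HTML version matters because many locator widgets rely on JavaScript or form input that crawlers may not execute. Generating the link list from the same canonical location data keeps the two in sync automatically; a minimal sketch, with an assumed data shape:

```python
from html import escape

def location_index_html(locations):
    """Render a plain, crawlable list of links to every location landing page."""
    items = "\n".join(
        f'  <li><a href="{escape(loc["url"])}">{escape(loc["name"])}</a></li>'
        for loc in locations
    )
    return f"<ul>\n{items}\n</ul>"

print(location_index_html([
    {"name": "Santa Rosa", "url": "https://example.com/locations/santa-rosa"},
    {"name": "Cloverdale", "url": "https://example.com/locations/cloverdale"},
]))
```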

☑ Local link building

We’re building the authority of our brand via the links we earn from the most authoritative sources. We’re actively seeking intelligent link building opportunities for each of our locations, reflective of our industry, but also of each branch’s unique geography.

☑ Guideline compliance

We’ve assessed that each of the locations our business plans to build local listings for complies with the Guidelines for Representing Your Business on Google. Each location is a genuine physical location (not a virtual office or PO box) and conducts face-to-face business with consumers, either at our locations or at customers’ locations. We’re compliant with Google’s rules for the naming of each location, and, if appropriate, we understand how to handle listing multi-department and multi-practitioner businesses. None of our Google My Business listings is at risk for suspension due to basic guideline violations. We’ve learned how to avoid every possible local SEO pitfall.

☑ Full Google My Business engagement

We’re making maximum use of all available Google My Business features that can assist us in achieving our goals. This could include Google Posts, Questions & Answers, Reviews, Photos, Messaging, Booking, Local Service Ads, and other emerging features.

☑ Local listing development

We’re using software like Moz Local to scale creation of our local listings on the major aggregators (Infogroup, Acxiom, Localeze and Factual) as well as key directories like Superpages and Citysearch. We’re confident that our accurate, consistent data is being distributed to these most important platforms.

☑ Local listing monitoring

We know that local listings aren’t a set-and-forget asset and are taking advantage of the ongoing monitoring SaaS provides, increasing our confidence in the continued accuracy of our data. We’re aware that, if left unmanaged, local business listing data can degrade over time, due to inputs from various, non-authoritative third parties as well as normal data flow across platforms.

☑ In-store strategy

All public-facing staff are equipped with the necessary training to implement our brand’s customer service policy, answer FAQs, or escalate them via a clear hierarchy, resolving complaints before they become negative online reviews. We have installed in-store signage or other materials that actively invite consumer complaints in person, via an after-hours helpline, or by text message, ensuring we make a maximum effort to build and defend our strong reputation.

☑ Review acquisition

We’ve developed a clear strategy for acquiring reviews on an ongoing basis on the review sites we’ve deemed to be most important to our brand. We’re compliant with the guidelines of each platform on which we’re earning reviews. We’re building website-based reviews and testimonials, too.

☑ Review monitoring & response

We’re monitoring all incoming reviews to identify both positive and negative emerging sentiment trends at specific locations and we’re conversant with Net Promoter Score. We’ve created a process for responding with gratitude to positive reviews. We’re defending our reputation and revenue by responding to negative reviews in ways that keep customers who complain instead of losing them, to avoid needless drain of new customer acquisition spend. Our responses are building a positive impression of our brand. We’ve built or acquired solutions to manage reviews at scale.
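
At hundreds of locations, emerging sentiment trends are easiest to spot with at least a simple automated screen over incoming review data. A toy sketch of the idea, assuming you can export (location, star rating) pairs from your review management tool; a real system would also weight recency and analyze review text:

```python
from collections import defaultdict

def flag_negative_trends(reviews, threshold=3):
    """Return locations with `threshold` or more low-star reviews in the period."""
    low_counts = defaultdict(int)
    for location_id, stars in reviews:
        if stars <= 2:
            low_counts[location_id] += 1
    return [loc for loc, count in low_counts.items() if count >= threshold]

recent = [("store-042", 1), ("store-042", 2), ("store-042", 1), ("store-007", 5)]
print(flag_negative_trends(recent))  # ['store-042']
```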

☑ Local PR

Each location of our brand has been empowered to build a local footprint in the community it serves, customizing outreach to match community culture. We’re exploring sponsorships, scholarships, workshops, conferences, news opportunities, and other forms of participation that will build our brand via online links and social mentions as well as offline WOM marketing. We’re continuously developing cohesive online/offline outreach for maximum impact on brand recognition, rankings, reputation, and revenue.

☑ Social media

We’ve identified the social platforms that are most popular with our consumer base and a best fit for our brand. We’re practicing ongoing social listening to catch and address positive and negative sentiment trends as they arise. We’ve committed to a social mindset based on sharing rather than the hard sell.

☑ Spam-ready

We’re aware that our brand, our listings, and our reviews may be subject to spam, and we know what options are available for reporting it. We’re also prepared to detect when the spammy behaviors of competitors (such as fake addresses, fake negative/positive reviews, or keyword stuffing of listings) are giving them an unfair advantage in our markets, and have a methodology for escalating reports of guideline violations.

☑ Paid media

We’re investing wisely in both online and offline paid media, and we’re carefully tracking and analyzing the outcomes of our pay-per-click, radio, TV, billboard, and phone sales strategies. We’re exploring new opportunities, as appropriate and as they emerge, like Google Local Service Ads.

☑ Build/buy

When any new functionality (like Google Posts or Google Q&A) needs to be managed at scale, we have a process for determining whether we need to build or acquire new technology. We know we have to weigh the pros/cons of developing in-house or buying ready-made solutions.

☑ Competitive difference-maker

Once you’ve checked off all of the above elements, you’re ready to move forward towards identifying a USP for your brand that no one else in your market has explored. Be it a tool, widget, app, video marketing campaign, newsworthy acquisition, new partnership, or some other asset, this venture will require deep competitive and market research to discover a need that has yet to be filled well by your competitors. If your business can serve this need, it can set your brand apart for years to come.

Free advice, specifically for local enterprises

It’s asserted that customers may forget what you say, but they’ll never forget how you make them feel.

Call me a Californian, but I continue to be amazed by automotive TV spots that show large trucks driving through beautiful creeks (thanks for tearing up precious riparian habitat during our state-wide drought) and across pristine arctic snowfields (instantly reminding me of climate change). Meanwhile, my family have become Tesla-spotters, seeing that “zero emissions” messaging on the tail of every luxury eco-vehicle that passes us by. As consumers, we know how we feel.

Technical and organizational considerations aside, this is where I see one of the greatest risks posed to the local enterprise structure. Insensitivity at a regional or hyperlocal level — the failure to research customer needs with the intention of meeting them — has been responsible for some of the most startling bad news for enterprises in recent memory. From ignored negative reviews across fast food franchises, to the downsizing of multiple apparel retailers who have been unable to stake a clear claim in the shifting shopping environment, brands that aren’t successful at generating positive consumer “feelings” may need to reevaluate not just their local search marketing mindset, but their basic identity.

If this sounds uncomfortable or risky, consider that we are seeing a rising trend in CEOs taking stands on issues of national import in America. This is about feelings. Consumers are coming to expect this, and it feeds down to the local level.

Hyperlocalized market research

If your brand is considering opening a new branch in a new state or city, you’ll be creating profiles as part of your research. These could be based on everything from reading local news to conducting formal surveys. If I were to do something like this for my part of California, these are the factors I’d be highlighting about the region:

What follows pairs each California reality with the questions an enterprise entering the state should be asking itself.

We’ve been blasted by drought and wildfire. In 2017 alone, we went through 9,133 fires. On a positive note, Indigenous thought-leadership is beginning to be re-implemented in some areas to solve our worst ecological problems (water scarcity, salmon loss, absence of traditional forestry practices).

Can your brand help conserve water, re-house thousands of homeless residents, fund mental health services despite budget cuts, make legal services affordable, provide solutions for increased future safety? What are your green practices? Are you helping to forward ecological recovery efforts at a tribal, city or state level?

We’re grumbling more loudly about tech gentrification. If you live in Mississippi, sit down for this. The average home price in your state is $199,028. In my part of California, it’s $825,000. In San Francisco, specifically, you’ll need $1.2 million to buy a tiny studio apartment… if you can find one. While the causes are complex, the people I talk with generally blame Silicon Valley.

Can your brand be part of this conversation? If not, you’re not really addressing what is on statewide consumers’ minds. Particularly if you’re marketing a tech-oriented company, taking the housing crisis seriously and coming up with solutions for even a modest amount of relief would certainly be positive and newsworthy.

We’ve turned to online shopping for an interesting variety of reasons. And it’s not just because we’re techie hipsters. The retail inventory in big cities (San Francisco) can be overwhelming to sort through, and in small towns (Cloverdale), the shopping options are too few to meet our basic and luxury desires.

Can your brand thrive in the gaps? If you’re located in a metro area, you may need to offer personal assistance to help consumers filter through options. If you’ve got a location somewhere near small towns, strategies like same-day delivery could help you remain competitive.

We’ve got our Hispanic/Latino identity back. Our architecture, city and street names are daily reminders that California has a lot more to do with Mexico than it ever did with the Mayflower. We may have become part of the U.S. in 1850, but pay more attention to 2014 — the year that our Hispanic/Latino community became the state’s largest ethnic group. This is one of the most vibrant happenings here. At the same time, our governor has declared us a sanctuary state for immigrants, and we’re being sued for it by the Justice Department.

Can your brand celebrate our state’s diversity? If you’re doing business in California today, you’ll need bilingual marketing, staff, and in-store amenities. Pew Research publishes ongoing data about the Hispanic/Latino segment of our population. What is your brand doing to ensure that these customers feel truly served?

We’re politically diverse. Our single state is roughly the same size as Sweden, and we truly do run the political gamut from A–Z here. Are the citizens removing a man-made dam heroically restoring ecology, or getting in the way of commerce? You’ll find voices on every side.

Can your brand take the risk of publicizing its honest core values? If so, you are guaranteed to win and lose Californian customers, so do your research and be prepared to own your stance. Know that at a regional level, communities differ greatly. Those TV ads that show trucks running roughshod through fragile ecosystems may fly in some cities and be viewed with extreme distaste in others.

Money is top of mind. More than ⅓ of Californians have zero savings. Over ½ of our citizens have less than $1,000 in savings. We invest more in welfare than the next two states combined. And while our state has the highest proportion of resident billionaires, they are vastly outnumbered by citizens who are continuously anxious about struggling to get by. Purchasing decisions are seldom easy.

Can your brand employ a significant number of residents and pay them a living wage? Could your entry into a new market lift poverty in a town and provide better financial security? This would be newsworthy! Have ideas for lowering prices? You’ll get some attention there, too.

Obviously, I’m painting with broad strokes here, just touching on some of the key points that your enterprise would need to consider in determining to commence operations in any city or state. Why does this matter? Because the hyperlocalization of marketing is on the rise, and to engage with a community, you must first understand it.

Every month, I see businesses shutter because someone failed to apprehend true local demand. Did that bank pick a good location for a new branch? Yes — its next-nearest branch is on the other side of the city. Will the new location of the taco franchise remain open? No — it’s already sitting empty, while the beloved taco wagon down the street has a line that spills out of its parking lot all night long.

Summing up

“What helps people, helps business.” - Leo Burnett

The checklist in this post can help you create an enterprise-appropriate strategy for well-organized local search marketing, and it’s my hope that you’ll evaluate all SEO advice for its fitness to your model. These are the basic necessities. But where you go from there is the exciting part. The creative solutions you find to meet the specific wants and needs of individualized service communities could spell out the longevity of your brand’s success.



Moz Blog


The Pro Marketer’s Product Launch Checklist for 2018 – Whiteboard Friday

Posted by randfish

What goes into a truly exceptional product launch? To give your new product or feature the best chance at success, it’s important to wrangle all the many moving pieces involved in pulling off a seamless marketing launch. From listing audience members and influencers to having the right success metrics to having a rollback plan, Rand shares his best advice in the form of an actionable checklist in this Whiteboard Friday. And make sure to check out the last item — it may be the best one to start with!

The Pro Marketer's Product Launch Checklist 2018


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are chatting about crafting a professional marketer’s product launch checklist for 2018.

So many of you are undoubtedly in the business of doing things around SEO and around web marketing, around content marketing, around social media marketing in service of a product that you are launching or a feature that you are launching or multiple products. I think it pays for us to examine what goes into a very successful product launch.

Of course, I’ve been a part of many of these at Moz, as part of many of the startups and other companies that I advise, and there are some shared characteristics, particularly from the marketing perspective. I won’t focus on the product and engineering perspectives. We’ll talk about marketing product launches today.

☑ A defined audience, accompanied by a list of 10–100 real, individual people in the target group

So to start with, very first, top of our list: a defined audience. That can be a demographic or psychographic set of characteristics that defines your audience, or a topic, a niche, a job title, or a job-function type of characteristic that comprises the profile of who’s in your group. That should be accompanied by a list of 10 to 100 real people.

I know that many marketers out there love using personas, and I think it’s fine to use personas to help define this audience. But I’m going to urge you strongly to have that real list. Those could be:

  • Customers that you know you’re targeting,
  • People who have bought from you in the past and you’re hoping will buy again,
  • People you may have lost and are hoping to recapture; maybe they use a competitor’s product today, or they’re notable in some way.

As long as they fit your characteristics, I want you to have that list of those real people.

The problem with personas is you can’t talk to them. You can’t ask them real questions (or you can, but only in your own mind, where your imagination fills in the details). These are real people that you can talk to, email, ask questions of, show the product to, show the launch plan to, and get real feedback from. They should have shared characteristics. They should have an affinity for the product that you’re building or launching, hopefully, and they should share the problem.

Whatever the problem, almost every product, in fact, hopefully every product is actually trying to solve a problem better than the thing that came before it or the many things that came before it. Your audience should share whatever that problem is that you’re trying to solve.

☑ List of 25–500 influential people in the space, + contact info and an outreach plan

Okay. We’ll give this a nice check mark. Next, list of influential people in the space. That could be 25 to even hundreds or thousands of people potentially, plus their contact information and an outreach plan. That outreach plan should include why each target is going to care about the problem, about the solution, and why they’re going to share. Why will they amplify?

This is in answer to the question: Who will help amplify this and why? If you don’t have a great answer to that, your product launch will almost certainly fall flat from a marketing perspective. If you can build that list successfully, especially if, before you even launch, you know that 20 of those 500 people have said, “Yes, I’m going to amplify. Here’s why I care about this. I can’t wait until you give me permission to share it or release this thing or send me the version of it,” then that’s an awesome, awesome step.

☑ List of influential publications and media that influencers and target audience members consume

Next, similarly, just like we have a list of influential people, we want a list of influential publications and media that many influencers and many of your target audience members read, watch, subscribe to, listen to, follow, etc. Basically, these are the media and publications that both of those groups pay attention to. That could be events that these people go to. It could be podcasts they listen to. It could be shows they watch, blogs or email newsletters they subscribe to. It could be traditional media: magazines, radio, a YouTube channel. Whatever those publications are, they’re the ones we’re trying to build a list of here.

That is going to be part of our outreach target. We might have these influential people, and some of these could overlap. Some of these influential people may work for or at these influential publications and that’s fine. I just worry that too much influencer marketing is focused on individuals and not on publications when, in fact, both are critical to a product launch success.

☑ Metrics for success

Metrics, yes, marketers need metrics for success. Those should fall into three buckets. Exposure and branding includes things like press mentions and social engagement, and maybe a before-and-after survey comparison: we ran an anonymous survey of a group of our target audience before and after, and we measured the brand awareness differential. Traffic covers links, rankings, visits, time on site, etc. And conversions could be measured through last touch or, preferably, through full-funnel attribution.

☑ Promotional schedule with work items by team member and rollback plan

A promotion schedule. So this means we actually know what we’re doing and in what order as the launch rolls out. That could be before launch we’re doing a bunch of things around private beta or around sharing with some of these influential people and publications. Or we haven’t defined the audience yet. We need to do that. We have that schedule and work items by each team member, and we’re going to need a rollback plan. So if at any point along the way, the person who owns the product process says, “This is not good enough,” or, “We have a fundamental error,” or, “The flamethrower we’re building shoots ice instead of fire,” we should probably either rename and rebrand it or roll it back. We have that structure set up.

☑ FAQ from the beta/test period, from both potential customers and influencers

Next, frequently asked questions. This is where a beta or test period and test users come in super handy, because they will have asked us a bunch of questions. They’ll have asked as they’re playing with or observing or using the product. We should be able to take all of those questions from both potential customers and from influencers, and we should have those answers set up for our customer service and help teams and for people who are interfacing with the press and with influencers in case they reach out.

In an ideal world, we would also publish these online. We would have a place where we could reference them. They’re already published. This is particularly handy when press and influencers cover a launch and they link to a, “Oh, here’s how the ice thrower,” I’m assuming, “that we’re building is meant to work, and here’s at what temperatures it’s safe to operate,” etc.

☑ Media assets & content for press/influencer use

Next up, media assets and content for those press and publications and influencer use. For example:

  • Videos of people using the product and playing with it
  • Screencasts, screenshots if it’s a digital or software product
  • Photos
  • Demo-able versions if you want to give people login access to something special
  • Guidelines for press usage and citations, as well as things like logo and style guide

All of those types of things. Trust me, if your product launch goes well, people will ask you for this, or they will just use things that they steal from your site. You would much prefer to be able to control these assets and to control where the links and citations point, especially from an SEO perspective.

☑ Paid promotion triggers, metrics to watch, and KPIs

Next up, penultimate on our checklist, paid promotion triggers. Most of the time when you’re doing a product launch, there will also be some component that is non-organic, i.e., paid, such as paid content promotion. It could be pay-per-click ads. It could be Facebook advertising. It could be web advertising. It could be retargeting and remarketing. It could be broadcast advertising. All of those kinds of things.

You will want with each of those triggers, triggers that essentially say, “Okay, we’ve reached the point where we are now ready. We executed along our schedule, so we are now ready to turn on the paid promotion, and channel X is going to be the start of that, then channel Y and then channel Z.”

Then we should have KPIs, key performance indicators, that tell us whether we’re going to grow or shrink that spend, something like this. So we know, hey, the product launch is going this well, so we’re going to keep our current level investment. But if we tick up over here, we’re going to invest more. If we get to here, we’re going to max out our spend. We know that our maximum spend is X. Versus it goes the other way and over here, we’re going to cut. We’re going to cut all spend if we fall below metric Z.

☑ A great set of answers and 100% alignment on the following statement:

Last but not least on our checklist, this should exist even prior to the product design process. In fact, if you’re doing this at the end of a product launch checklist, the rest of this is not going to go so well. But if you start product design with this in mind and then maintain it all the way through launch, through messaging, through all the marketing that you do, you’re going to be in good shape. That is a great set of answers and 100% alignment on the following statement, meaning everyone on the team who’s working on this agrees that this is how we’re going to position the product.

Before the product we’re launching existed, our target audience, the group of people up here, was underserved in these ways or by previous solutions or because of these problems. But now, thanks to the thing that we’ve done, the thing that we’ve created and what is extraordinary about this product, these problems or this problem is solved.

If you design in this fashion and then you roll out in this fashion, you get this wonderful alignment and connection between how you’re branding and marketing the product and how the product was conceived and built. The problem and its solution become clear throughout. That tends to do very, very well for product building and product launching.

All right, everyone, if you have additions to this checklist, I hope you leave them in the comments below. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog


Email Clickthrough Rate: 9-point checklist to get more clicks for your email marketing by reducing perceived cost

A walk through our Email Click Cost Force Checklist, step-by-step
MarketingSherpa Blog


Email Open Rates: 9-point checklist to get more opens for your email marketing by reducing perceived cost

Every decision you ask prospective customers to make has a perceived value to the customer as well as a perceived cost. This checklist will help you minimize the perceived cost of an email open to help you increase your brand’s email open rate.
MarketingSherpa Blog


The Website Migration Guide: SEO Strategy, Process, & Checklist

Posted by Modestos

What is a site migration?

A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.

Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent to which search engine ranking signals have been affected, as well as how long it may take the affected business to roll out a successful recovery plan.

Quick access links

Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Site migration checklist
Appendix: Useful tools


Site migration examples

The following section discusses what both successful and unsuccessful site migrations look like and explains why it is 100% possible to come out of a site migration without suffering significant losses.

Debunking the “expected traffic drop” myth

Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.

Examples of unsuccessful site migrations

The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.

Example of a poor site migration — recovery took 6 months!

But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.

Another example of a poor site migration — no signs of recovery 6 months on!

In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic, or for such a long period, aside from the first few weeks where there is high volatility as Google discovers the new URLs and updates search results.

Examples of successful site migrations

What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:

  1. Minimal visibility loss during the first few weeks (short-term goal)
  2. Visibility growth thereafter — depending on the type of migration (long-term goal)

The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.

The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.

As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.

Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.

Example of a very successful site migration — instant growth following new site launch!

This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.

In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.

Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as the main reasons so many site migrations fail.


Site migration types

There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.

Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:

  • Site moves with URL changes
  • Site moves without URL changes

Site move migrations


These typically occur when a site moves to a different URL due to any of the below:

Protocol change

A classic example is when migrating from HTTP to HTTPS.

Subdomain or subfolder change

Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and both desktop and mobile URLs are uniformed.

Domain name change

Commonly occurs when a business is rebranding and must move from one domain to another.

Top-level domain change

This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.

Site structure changes

These are changes to the site architecture that usually affect the site’s internal linking and URL structure.

Other types of migrations

There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.

Replatforming

This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.

Content migrations

Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.

Mobile setup changes

With so many options available for a site’s mobile setup, moving between them, enabling app indexing, building an AMP site, or building a PWA website can also be considered a partial site migration, especially when an existing mobile site is being replaced by an app, AMP, or PWA.

Structural changes

These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.

Site redesigns

These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.

Hybrid migrations

In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time, the higher the complexity and the risks. Even though making too many changes at the same time increases the chances of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well planned and executed.


Common site migration pitfalls

Even though every site migration is different, there are a few common themes behind the most typical site migration disasters, with the biggest being the following:

Poor strategy

Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.

Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.

Poor planning

Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.

Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.

Lack of resources

Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.

As a rule of thumb, try to include a buffer of at least 20% more resources than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.

Lack of SEO/UX consultation

When changes are taking place on a website, every single decision needs to be weighed from both a UX and an SEO standpoint. For instance, removing large amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and too few images may have a negative impact on user engagement and damage the site’s conversions.

To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.

Late involvement

Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.

Lack of testing

In addition to a great strategy and thoughtful plan, dedicate some time and effort to thorough testing before launching the site. It’s far preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.

Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.

Slow response to bug fixing

There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.

Underestimating scale

Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let’s launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.

It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.


Site migration process

The site migration process can be split into six essential phases. They are all equally important, and skipping any of the below tasks could hinder the migration’s success to varying extents.


Phase 1: Scope & Planning

Work out the project scope

Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.

A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.

However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.

Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.

You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.

Prepare the project plan

Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.

The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.

A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.


Phase 2: Pre-launch preparation

This phase includes any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.

Wireframes review

Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.

Preparing the technical SEO specifications

Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.

The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.

Make sure to include specific requirements that cover at least the following areas:

  • URL structure
  • Meta data (including dynamically generated default values)
  • Structured data
  • Canonicals and meta robots directives
  • Copy & headings
  • Main & secondary navigation
  • Internal linking (in any form)
  • Pagination
  • XML sitemap(s)
  • HTML sitemap
  • Hreflang (if there are international sites)
  • Mobile setup (including the app, AMP, or PWA site)
  • Redirects
  • Custom 404 page
  • JavaScript, CSS, and image files
  • Page loading times (for desktop & mobile)

The specification should also cover the areas of CMS functionality that allow users to:

  • Specify custom URLs and override default ones
  • Update page titles
  • Update meta descriptions
  • Update any h1–h6 headings
  • Add or amend the default canonical tag
  • Set the meta robots attributes to index/noindex/follow/nofollow
  • Add or edit the alt text of each image
  • Include Open Graph fields for description, URL, image, type, sitename
  • Include Twitter Open Graph fields for card, URL, title, description, image
  • Bulk upload or amend redirects
  • Update the robots.txt file

It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).

Identifying priority pages

One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.

In order to do this, you need to:

  1. Crawl the legacy site
  2. Identify all indexable pages
  3. Identify top performing pages

How to crawl the legacy site

Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links, etc. Regardless of your crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:

  • Ignore robots.txt (in case any vital parts are accidentally blocked)
  • Follow internal “nofollow” links (so the crawler reaches more pages)
  • Crawl all subdomains (depending on scope)
  • Crawl outside start folder (depending on scope)
  • Change the user agent to Googlebot (desktop)
  • Change the user agent to Googlebot (smartphone)

Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.

How to identify the indexable pages

Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:

  • Return a 200 server response
  • Either do not have a canonical tag or have a self-referring canonical URL
  • Do not have a meta robots noindex
  • Aren’t excluded from the robots.txt file
  • Are internally linked from other pages (non-orphan pages)

The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
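
To make this concrete, here’s a minimal Python sketch that filters a crawl export down to indexable pages. The CSV file name and column headers are assumptions; adjust them to whatever your crawler actually exports.

import pandas as pd

# Load the legacy crawl export. The file name and column headers below are
# assumptions; adjust them to match your crawler's CSV output.
crawl = pd.read_csv("legacy_crawl.csv")

indexable = crawl[
    (crawl["Status Code"] == 200)
    # Missing or self-referring canonical only
    & (crawl["Canonical"].isna() | (crawl["Canonical"] == crawl["Address"]))
    # No noindex in the meta robots tag
    & ~crawl["Meta Robots"].fillna("").str.contains("noindex", case=False)
    # Not blocked by robots.txt
    & (crawl["Indexability Status"].fillna("") != "Blocked by robots.txt")
    # Internally linked from at least one other page (non-orphan)
    & (crawl["Inlinks"] > 0)
]

indexable.to_csv("indexable_pages.csv", index=False)
print(f"{len(indexable)} indexable pages out of {len(crawl)} crawled")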

How to identify the top performing pages

Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.

If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.

It’s recommended to prepare a spreadsheet that includes the below fields:

  • Legacy URL (include only the indexable ones from the crawl data)
  • Organic visits during the last 12 months (Analytics)
  • Revenue, conversions, and conversion rate during the last 12 months (Analytics)
  • Pageviews during the last 12 months (Analytics)
  • Number of clicks from the last 90 days (Search Console)
  • Top linked pages (Majestic SEO/Ahrefs)

With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
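
If you want to combine these data points programmatically, a rough Python sketch follows. The file and column names are hypothetical; the idea is simply to merge each data source on the URL and rank pages by a blended score.

import pandas as pd

# Hypothetical exports, all keyed on the page URL: analytics.csv (organic
# visits, revenue, conversions), gsc_clicks.csv (Search Console clicks),
# links.csv (referring domains from Majestic/Ahrefs).
pages = pd.read_csv("indexable_pages.csv")[["Address"]].rename(columns={"Address": "URL"})
analytics = pd.read_csv("analytics.csv")
clicks = pd.read_csv("gsc_clicks.csv")
links = pd.read_csv("links.csv")

merged = (
    pages.merge(analytics, on="URL", how="left")
         .merge(clicks, on="URL", how="left")
         .merge(links, on="URL", how="left")
         .fillna(0)
)

# Rank each page on every metric (1 = best), then average the ranks into a
# single priority score: the lower the score, the more important the page.
metrics = ["organic_visits", "revenue", "clicks", "referring_domains"]
merged["priority"] = merged[metrics].rank(ascending=False).mean(axis=1)
merged.sort_values("priority").to_csv("priority_pages.csv", index=False)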

The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will be negatively affected.

Benchmarking

Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.

Keywords rank tracking

If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you’ll later struggle to figure out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.

Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).

If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking point of view. In general, non-brand keywords tend to be more competitive and volatile, so for most sites it makes sense to focus mostly on those.

Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.

Site performance

The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.

It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.

MOBILE

Page type          Speed     FCP     DCL     Optimization   Optimization score
Homepage           Fast      0.7s    1.4s    Good           81/100
Category page      Slow      1.8s    5.1s    Medium         78/100
Subcategory page   Average   0.9s    2.4s    Medium         69/100
Product page       Slow      1.9s    5.5s    Good           83/100

DESKTOP

Page type          Speed     FCP     DCL     Optimization   Optimization score
Homepage           Good      0.7s    1.4s    Average        81/100
Category page      Fast      0.6s    1.2s    Medium         78/100
Subcategory page   Fast      0.6s    1.3s    Medium         78/100
Product page       Good      0.8s    1.3s    Good           83/100

Old site crawl data

A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.

Search Console data

Also consider exporting as much of the old site’s Search Console data as possible. Search analytics data is only available for the last 90 days, and chances are that the old site’s Search Console data will disappear sooner or later once the new site has gone live. Data worth exporting includes:

  • Search analytics queries & pages
  • Crawl errors
  • Blocked resources
  • Mobile usability issues
  • URL parameters
  • Structured data errors
  • Links to your site
  • Internal links
  • Index status
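
Search analytics data can also be pulled programmatically via the Search Console API, which makes it easier to capture everything before it expires. A hedged Python sketch using Google’s client library follows; it assumes you’ve already created OAuth credentials with access to the property, and "credentials.json" is a placeholder file name.

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# "credentials.json" is a placeholder for stored OAuth user credentials with
# Search Console (webmasters) read access, e.g. created via google-auth-oauthlib.
creds = Credentials.from_authorized_user_file("credentials.json")
service = build("webmasters", "v3", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2017-08-01",  # search analytics only goes back ~90 days
        "endDate": "2017-10-30",
        "dimensions": ["page", "query"],
        "rowLimit": 5000,
    },
).execute()

for row in response.get("rows", []):
    page, query = row["keys"]
    print(page, query, row["clicks"], row["impressions"], row["position"])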

Redirects preparation

The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.

Why are redirects important in site migrations?

Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.

What happens when redirects aren’t correctly implemented?

When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.

301, 302, JavaScript redirects, or meta refresh?

When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.

302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.

Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.

If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.

Redirect mapping process

If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.

The redirect mapping file is a spreadsheet that includes the following two columns:

  • Legacy site URL –> a page’s URL on the old site.
  • New site URL –> a page’s URL on the new site.

When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.

Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.

Increasing efficiencies during the redirect mapping process

Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.

Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.

Don’t forget the legacy redirects!

You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on launch day. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want to lose.

Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.

Example:

URL A redirects to URL B (legacy redirect)

URL B redirects to URL C (new redirect)

Which results in the following redirect chain:

URL A –> URL B –> URL C

To eliminate this, amend the existing legacy redirect and create a new one so that:

URL A redirects to URL C (amended legacy redirect)

URL B redirects to URL C (new redirect)

Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines; they are instant traffic, conversion, and ranking killers!
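
Chain flattening and loop detection can be automated once all redirects (legacy and new) sit in one spreadsheet. Here’s a minimal Python sketch, assuming a two-column CSV named redirect_map.csv as described above.

import pandas as pd

# redirect_map.csv has two columns, "Legacy URL" and "New site URL", combining
# both the legacy redirects and the newly mapped ones.
mapping = dict(pd.read_csv("redirect_map.csv").values)

def final_destination(url, max_hops=10):
    """Follow the mapping until reaching a URL that isn't itself redirected."""
    seen = [url]
    while url in mapping:
        url = mapping[url]
        if url in seen:
            raise ValueError("Redirect loop: " + " -> ".join(seen + [url]))
        seen.append(url)
        if len(seen) > max_hops:
            raise ValueError("Suspiciously long chain: " + " -> ".join(seen))
    return url

# Flatten every entry so each legacy URL points straight at its final destination.
flattened = {src: final_destination(dst) for src, dst in mapping.items()}
pd.DataFrame(list(flattened.items()), columns=["Legacy URL", "New site URL"]).to_csv(
    "redirect_map_flattened.csv", index=False
)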

Implement blanket redirect rules to avoid duplicate content

It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.

There are also some standard redirect rules that should be in place to avoid generating duplicate content issues. These typically include:

  • Redirecting HTTP URLs to HTTPS (or vice versa, depending on the protocol the site launches on)
  • Redirecting non-www URLs to www (or vice versa, depending on the preferred hostname)
  • Redirecting URLs with trailing slashes to the non-trailing-slash version (or vice versa, depending on the site’s URL conventions)
  • Redirecting uppercase URLs to their lowercase equivalents
  • Redirecting default directory index pages (e.g. /index.html) to the directory root

Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
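
To illustrate the difference in scale, the hypothetical Python sketch below shows how a handful of pattern rules can stand in for thousands of one-to-one entries. The patterns and domains are made up, and in practice rules like these would live in the web server or CDN configuration rather than in application code.

import re

# Hypothetical pattern rules, each a (regex, replacement) pair. A handful of
# rules like these can stand in for thousands of one-to-one redirect entries.
RULES = [
    (r"^http://", "https://"),                                     # force HTTPS
    (r"^https://example\.com/", "https://www.example.com/"),       # force www
    (r"^(https://www\.example\.com/.+?)/+$", r"\1"),               # strip trailing slash
    (r"^(https://www\.example\.com/blog)/\d{4}/\d{2}/", r"\1/"),   # drop date folders
]

def apply_rules(url):
    for pattern, replacement in RULES:
        url = re.sub(pattern, replacement, url)
    return url

print(apply_rules("http://example.com/blog/2017/11/site-migrations/"))
# -> https://www.example.com/blog/site-migrations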

Avoid internal redirects

Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.

Don’t forget your image files

If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.


Phase 3: Pre-launch testing

The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), could also be identified at an early stage. But the more technical components should only be tested once fully implemented — things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective; it requires more resources and causes significant delays. Poor testing, and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.

Making sure search engines cannot access the staging/test site

Before making the new site available on a staging/testing environment, take precautions so that search engines cannot index it. There are a few different ways to do this, each with different pros and cons.

Site available to specific IPs (most recommended)

Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.

Password protection

Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.

Robots.txt blocking

Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.

User-agent: *
Disallow: /

One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.

User journey review

If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.

On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.

Site architecture review

A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.

Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or too many similar pages are optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhancing the site architecture.

Meta data & copy review

Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are created. Launching the new site without properly optimized page titles or with missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (e.g. user reviews, comments) has also been migrated.
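
If the URLs are staying the same, a quick way to audit this at scale is to diff the old and new crawls. A rough Python sketch follows; the column names are assumptions, and if URLs change you’d join via the redirect mapping first.

import pandas as pd

# Compare titles and meta descriptions between the old and new crawls. Column
# names are assumptions; if URLs have changed, join via the redirect mapping first.
cols = ["Address", "Title 1", "Meta Description 1"]
old = pd.read_csv("old_site_crawl.csv")[cols]
new = pd.read_csv("new_site_crawl.csv")[cols]

diff = old.merge(new, on="Address", suffixes=(" (old)", " (new)"))
changed = diff[
    (diff["Title 1 (old)"] != diff["Title 1 (new)"])
    | (diff["Meta Description 1 (old)"] != diff["Meta Description 1 (new)"])
]
changed.to_csv("meta_data_changes.csv", index=False)
print(f"{len(changed)} pages with changed titles or meta descriptions")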

Internal linking review

Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:

  • Main & secondary navigation
  • Header & footer links
  • Body content links
  • Pagination links
  • Horizontal links (related articles, similar products, etc)
  • Vertical links (e.g. breadcrumb navigation)
  • Cross-site links (e.g. links across international sites)

Technical checks

A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.

Robots.txt file review

Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:

Disallow: /

If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.

But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.

When preparing the new site’s robots.txt file, make sure that:

  • It doesn’t block search engine access to pages that are intended to get indexed.
  • It doesn’t block any JavaScript or CSS resources search engines require to render page content.
  • The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
  • It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
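
A small automated check can guard against the classic “Disallow: /” mishap described above. This sketch uses Python’s standard-library robots.txt parser; point it at the staging robots.txt before launch and at the live one on launch day. The URLs are placeholders.

from urllib import robotparser

# Fetch and parse the robots.txt file (placeholder URL).
parser = robotparser.RobotFileParser("https://staging.example.com/robots.txt")
parser.read()

# A sample of URLs that must remain crawlable, including JS/CSS assets
# search engines need in order to render page content.
must_be_crawlable = [
    "https://staging.example.com/",
    "https://staging.example.com/category/product/",
    "https://staging.example.com/assets/main.css",
    "https://staging.example.com/assets/app.js",
]

for url in must_be_crawlable:
    if not parser.can_fetch("Googlebot", url):
        print(f"BLOCKED: {url}")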

Canonical tags review

Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive; these two are conflicting signals and you’ll need to eliminate one of them.

Meta robots review

Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.

XML sitemaps review

Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.

You should check each XML sitemap to make sure that:

  • It validates without issues
  • It is encoded as UTF-8
  • It does not contain more than 50,000 rows
  • Its size does not exceed 50MB when uncompressed

If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.

In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:

  • 3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
  • Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
  • Canonicalized pages (apart from self-referring canonical URLs)
  • Pages with a meta robots noindex directive
<!DOCTYPE html>
<html><head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
  • Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Fri, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
  • Pages blocked from the robots.txt file

Building clean XML sitemaps can help you monitor the new site’s true indexing levels once it goes live. Without clean sitemaps, it will be very difficult to spot any indexing issues.

Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
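
Much of this validation can also be scripted. The following Python sketch fetches a sitemap (placeholder URL), counts its URLs, and spot-checks that a sample of them return a 200 response without redirecting or carrying a noindex header:

import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP_URL).content)
urls = [loc.text for loc in root.findall(".//sm:loc", NS)]
print(f"{len(urls)} URLs listed (limit: 50,000 per sitemap)")

# Spot-check that listed URLs return 200 and aren't redirected or noindexed.
for url in urls[:100]:  # sample; drop the slice to check every URL
    response = requests.head(url, allow_redirects=False)
    if response.status_code != 200:
        print(f"{response.status_code}: {url}")
    elif "noindex" in response.headers.get("X-Robots-Tag", ""):
        print(f"noindex via HTTP header: {url}")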

HTML sitemap review

Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.

The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.

For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.

The NYTimes HTML sitemap (level 1)

The NYTimes HTML sitemap (level 2)

Structured data review

Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.

Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.

Google’s Structured Data Testing Tool

The tool will only report existing errors, not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors, you should also make sure that each page template includes the appropriate structured data markup for its content type.

Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.

JavaScript crawling review

You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.

As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, it does not mean that it is able to crawl JavaScript content across all major JavaScript frameworks. The following table summarizes Bartosz’s findings, showing that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.

Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.

Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.

Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!

Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.

Mobile site SEO review

Assets blocking review

First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.

Mobile-first index review

In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:

  • Page titles
  • Meta descriptions
  • Headings
  • Copy
  • Canonical tags
  • Meta robots attributes (i.e. noindex, nofollow)
  • Internal links
  • Structured data

A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.

In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.

Responsive site review

A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.

Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.

To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page:

<meta name="viewport" content="width=device-width, initial-scale=1.0">

If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.

Separate mobile URLs review

If the mobile website uses separate URLs from desktop, make sure that:

  1. Each desktop page has a rel=”alternate” tag pointing to the corresponding mobile URL.
  2. Each mobile page has a rel=”canonical” tag pointing to the corresponding desktop URL.
  3. When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
  4. Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
  5. There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
  6. The mobile URLs return a 200 server response.

Dynamic serving review

Dynamic serving websites serve different code to each device, but on the same URL.

On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary header helps Googlebot discover the mobile content.
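
For example, the HTTP response of a dynamically served page should include something like the following, indicating that the served HTML varies by user agent:

HTTP/1.1 200 OK
Content-Type: text/html
Vary: User-Agent
(…)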

Mobile-friendliness review

Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:

  1. The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
  2. The font size isn’t too small.
  3. Touch elements (i.e. buttons, links) aren’t too close.
  4. There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, either use a small HTML or image banner.
  5. Mobile pages aren’t too slow to load (see next section).

Google’s mobile-friendly test tool can help diagnose most of the above issues:

Google’s mobile-friendly test tool in action

AMP site review

If there is an AMP website and a desktop version of the site is available, make sure that:

  • Each non-AMP page (i.e. desktop, mobile) has a rel=”amphtml” tag pointing to the corresponding AMP URL.
  • Each AMP page has a rel=”canonical” tag pointing to the corresponding desktop page.
  • Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.

You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.

Mixed content errors

With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.

Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.

Mixed content errors in Chrome’s JavaScript Console

There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
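
For a quick automated pass, a naive Python sketch like the one below can flag insecure references in a page’s HTML (placeholder URL). Note that plain links to HTTP pages get flagged too, so filter to asset tags for a stricter check.

import re
import requests

PAGE_URL = "https://www.example.com/"  # placeholder

html = requests.get(PAGE_URL).text

# Crude scan for anything referenced over plain HTTP. A proper check would
# parse the DOM and inspect only the src/href/srcset attributes of asset
# tags (img, script, link, etc.), since <a href> links aren't mixed content.
insecure = re.findall(r'(?:src|href)=["\'](http://[^"\']+)', html)
for url in sorted(set(insecure)):
    print(f"Insecure reference: {url}")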

Image assets review

Google crawls images less frequently than HTML pages. If a site’s images are moving to a new location (e.g. from your domain to a CDN), there are ways to help Google discover and index the new images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page where an image appears and the image file itself have to get indexed.

Site performance review

Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.

Analytics tracking review

Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.

Redirects testing

Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.

Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:

  • Redirect loops (a URL that infinitely redirects to itself)
  • Redirects with a 4xx or 5xx server response.
  • Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
  • Canonical URLs that return a 4xx or 5xx server response.
  • Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
  • Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
  • Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
  • Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
  • Invalid characters in URLs.

Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site isn’t live yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth doing. The fact that a URL redirects does not mean it redirects to the right page.
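
Crawling the mapped redirects can be scripted as well. The Python sketch below requests each legacy URL on staging and flags chains, loops (requests raises an exception after too many hops), error responses, and wrong destinations. The CSV layout matches the redirect mapping spreadsheet described earlier.

import csv
import requests

with open("redirect_map.csv") as f:  # columns: "Legacy URL", "New site URL"
    rows = list(csv.DictReader(f))

for row in rows:
    legacy = row["Legacy URL"]
    try:
        response = requests.get(legacy, allow_redirects=True, timeout=10)
    except requests.RequestException as exc:  # includes TooManyRedirects (loops)
        print(f"ERROR {legacy}: {exc}")
        continue
    hops = [r.url for r in response.history] + [response.url]
    if response.status_code != 200:
        print(f"{response.status_code}: {' -> '.join(hops)}")
    elif len(hops) > 2:
        print(f"CHAIN ({len(hops) - 1} hops): {' -> '.join(hops)}")
    elif response.url != row["New site URL"]:
        print(f"WRONG DESTINATION: {legacy} -> {response.url} "
              f"(expected {row['New site URL']})")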


Phase 4: Launch day activities

When the site is down…

While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.

If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
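
In practice the 503 is usually configured at the web server or load balancer level, but as an illustration, a minimal Python sketch of a maintenance responder could look like this (the Retry-After header hints to crawlers when to come back):

from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 503 tells search engines the outage is temporary, so rankings
        # shouldn't be affected if the downtime stays short.
        self.send_response(503)
        self.send_header("Retry-After", "3600")  # suggest retrying in an hour
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<h1>Down for maintenance</h1><p>We'll be back shortly.</p>")

if __name__ == "__main__":
    HTTPServer(("", 8080), MaintenanceHandler).serve_forever()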

Technical spot checks

As soon as the new site has gone live, take a quick look at:

  1. The robots.txt file to make sure search engines are not blocked from crawling
  2. Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
  3. Top pages canonical tags
  4. Top pages server responses
  5. Noindex/nofollow directives, in case they are unintentional

The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.

Search Console actions

The following activities should take place as soon as the new website has gone live:

  1. Test & upload the XML sitemap(s)
  2. Set the Preferred location of the domain (www or non-www)
  3. Set the International targeting (if applicable)
  4. Configure the URL parameters to tackle any potential duplicate content issues early
  5. Upload the Disavow file (if applicable)
  6. Use the Change of Address tool (if switching domains)

Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.

Blocked resources prevent Googlebot from rendering the content of the page


Phase 5: Post-launch review

Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.

However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.

In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.

Check crawl stats and server logs

Keep an eye on the crawl stats available in Search Console to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages, it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.

Crawl stats on Google’s Search Console

Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and On-Crawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.

Review crawl errors regularly

Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.

Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!

Other useful Search Console features

Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).

Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.

Measuring site speed

Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, you should take immediate action; otherwise, your site’s traffic and conversions will almost certainly take a hit.

Evaluating speed using Google’s tools

Two tools that can help with this are Google’s Lighthouse and PageSpeed Insights.

The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:

  • Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
  • Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
  • Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
  • Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
  • Optimization suggestions: A list of best practices that could be applied to a page.

Google’s PageSpeed Insights in action

Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:

  • First Meaningful Paint, which measures when the primary content of a page is visible.
  • Time to Interactive, the point at which the page is ready for a user to interact with.
  • Speed Index, which shows how quickly a page is visibly populated.

Both tools provide recommendations to help improve any reported site performance issues.

Google’s Lighthouse in action

You can also use this Google tool to get a rough estimate of the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.

The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.

Measuring speed from real users

Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.

In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.


Phase 6: Measuring site migration performance

When to measure

Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.

In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to acclimatize to the new taxonomy, user journeys, etc. Such changes can initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, drawing data-driven conclusions about the new site’s UX too early can be risky.

But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.

How to measure

Performance measurement is very important, and even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there are many other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:

  • Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
  • Desktop and mobile rankings (from any reliable rank tracking tool)
  • User engagement (bounce rate, average time on page)
  • Sessions per page type (i.e. are the category pages driving as many sessions as before?)
  • Conversion rate per page type (i.e. are the product pages converting the same way as before?)
  • Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)

Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:

  • Number of indexed pages (Search Console)
  • Submitted vs indexed pages in XML sitemaps (Search Console)
  • Pages receiving at least one visit (analytics)
  • Site speed (PageSpeed Insights, Lighthouse, Google Analytics)

It’s only after you’ve looked into all the above areas that you could safely conclude whether your migration has been successful or not.

Good luck and if you need any consultation or assistance with your site migration, please get in touch!


Site migration checklist

An up-to-date site migration checklist is available to download from our site. Please note that the checklist is regularly updated to include all critical areas for a successful site migration.


Appendix: Useful tools

Crawlers

  • Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
  • Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
  • Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and compare different crawls. Copes well with large websites.
  • Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
  • On-Crawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.

Handy Chrome add-ons

  • Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
  • User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
  • Ayima Redirect Path: A great header and redirect checker.
  • SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
  • Scraper: An easy way to scrape website data into a spreadsheet.

Site monitoring tools

  • Uptime Robot: Free website uptime monitoring.
  • Robotto: Free robots.txt monitoring tool.
  • Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
  • SEO Radar: Monitors all critical SEO elements and fires alerts when these change.

Site performance tools

  • PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
  • Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
  • Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.

Structured data testing tools

  • Google’s Structured Data Testing Tool: Validates the structured data markup of any URL or code snippet.

Mobile testing tools

  • Google’s mobile-friendly test tool: Flags mobile usability issues on any URL.
  • Google’s AMP Test Tool: Validates AMP pages.

Backlink data sources

  • Majestic SEO: Link intelligence data, including referring domains and backlinks per URL.
  • Ahrefs: Backlink and referring domain data per URL.



Moz Blog


Optimizing Email Capture: 9-point checklist to grow your email marketing list by minimizing the perceived cost of opting in

Only 17% of marketers say their email list is rapidly growing. One inhibitor may be your email opt-in form and landing page. Read now and download the free PDF checklist (no form fill required, instant download) to get your email marketing database growing more rapidly.

MarketingSherpa Blog


How to Rank in 2018: The SEO Checklist – Whiteboard Friday

Posted by randfish

It’s hard enough as it is to explain to non-SEOs how to rank a webpage. In an increasingly complicated field, to do well you’ve got to have a good handle on a wide variety of detailed subjects. This edition of Whiteboard Friday covers a nine-point checklist of the major items you’ve got to cross off to rank in the new year — and maybe get some hints on how to explain it to others, too.

How to Rank in 2018: An SEO Checklist

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to a special New Year’s edition of Whiteboard Friday. This week we’re going to run through how to rank in 2018 in a brief checklist format.

So I know that many of you sometimes wonder, “Gosh, it feels overwhelming to try and explain to someone outside the SEO profession how to get a web page ranked.” Well, you know what? Let’s explore that a little bit this week on Whiteboard Friday. I sent out a tweet asking folks, “Send me a brief checklist in 280 characters or less,” and I got back some amazing responses. I have credited some folks here when they’ve contributed. There is a ton of detail to ranking in the SEO world, to try and rank in Google’s results. But when we pull out, when we go broad, I think that just a few items, in fact just the nine we’ve got here can basically take you through the majority of what’s required to rank in the year ahead. So let’s dive into that.

I. Crawlable, accessible URL whose content Google can easily crawl and parse.

So we want Googlebot’s spiders to be able to come to this page, to understand the content that’s on there in a text readable format, to understand images and visuals or video or embeds or anything else that you’ve got on the page in a way that they are going to be able to put into their web index. That is crucial. Without it, none of the rest of this stuff even matters.
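
As a rough first pass, you can script the basics of this check yourself: does the URL return a 200, and is it carrying a noindex directive in either the headers or the markup? A minimal sketch (no substitute for a real crawl or Search Console, and it ignores robots.txt entirely):

    # Rough crawlability check: status code, X-Robots-Tag, and meta robots.
    import requests
    from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

    def crawl_issues(url):
        r = requests.get(url, timeout=15)
        issues = []
        if r.status_code != 200:
            issues.append(f"status {r.status_code}")
        if "noindex" in r.headers.get("X-Robots-Tag", "").lower():
            issues.append("X-Robots-Tag: noindex")
        soup = BeautifulSoup(r.text, "html.parser")
        robots = soup.find("meta", attrs={"name": "robots"})
        if robots and "noindex" in robots.get("content", "").lower():
            issues.append("meta robots: noindex")
        return issues or ["looks crawlable"]

    print(crawl_issues("https://www.example.com/"))  # placeholder URL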

II. Keyword research

We need to know and to uncover the words and phrases that searchers are actually using to solve or to get answers to the problem that they are having in your world. Those should be problems that your organization, your website is actually working to solve, that your content will help them to solve.

What you want here is a primary keyword and hopefully a set of related secondary keywords that share the searcher's intent. So the intent behind all of these terms and phrases should be the same, so that the same content can serve it. When you do that, we now have a primary and a secondary set of keywords that we can target in our optimization efforts.
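
In practice, that grouping can be as simple as a small data structure: one primary keyword per page, plus the secondary phrasings that share its intent. A minimal sketch (the page path and keywords are all hypothetical):

    # One page should target one intent: a primary keyword plus
    # secondary phrasings of the same need. All terms are hypothetical.
    keyword_targets = {
        "/emergency-plumbing/": {
            "primary": "emergency plumbing repair",
            "secondary": [
                "24 hour plumber",
                "burst pipe repair",
                "emergency plumber near me",
            ],
        },
    }

    for page, terms in keyword_targets.items():
        print(f"{page}: {terms['primary']} (+{len(terms['secondary'])} secondary terms)")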

III. Investigate the SERP to find what Google believes to be relevant to the keyword's searches

I want you to do some SERP investigation, meaning perform a search query in Google, see what comes back to you, and then figure out from there what Google believes to be relevant to the keyword's searches. What does Google think is the content that will answer this searcher's query? You're trying to figure out intent, the type of content that's required, and whatever missing pieces might be there. If you can find holes where, hey, no one is serving this, but I know that people want the answer to it, you might be able to fill that gap and take over that ranking position. Thanks to Gaetano, @gaetano_nyc, for the great suggestion on this one.

IV. Have the most credible, amplifiable person or team available create content that’s going to serve the searcher’s goal and solve their task better than anyone else on page one.

There are three elements here. First, we want an actually credible person or persons, worthy of amplification, to create the content. Why is that? Well, because if we do that, we make amplification, link building, and social sharing way more likely to happen, and our content becomes more credible, both in the eyes of searchers and visitors as well as in Google's eyes too. So to the degree that that is possible, I would certainly urge you to do it.

Next, we’re trying to serve the searcher’s goal and solve their task, and we want to do that better than anyone else does it on page one, because if we don’t, even if we’ve optimized a lot of these other things, over time Google will realize, you know what? Searchers are frustrated with your result compared to other results, and they’re going to rank those other people higher. Huge credit to Dan Kern, @kernmedia on Twitter, for the great suggestion on this one.

V. Craft a compelling title and meta description.

Yes, Google still does use the meta description quite frequently. I know it seems like sometimes they don't. But, in fact, there's a high percentage of the time when the actual meta description from the page is used. There's an even higher percentage where the title is used. The URL, while Google sometimes truncates those, is also used in the snippet, as well as other elements. We'll talk about schema and other kinds of markup later on. But the snippet is something that is crucial to your SEO efforts, because it determines how you display in the search result. How Google displays your result determines whether people want to click on your listing or someone else's. The snippet is your opportunity to say, "Come click me instead of those other guys." If you can optimize this, both from a keyword perspective, using the words and phrases that people want, as well as from a relevancy and a pure drawing-the-click perspective, you can really win.
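
Display limits in the SERP shift over time, but you can still flag obvious truncation risks programmatically. A minimal sketch, assuming the common rules of thumb of roughly 60 characters for titles and 155 for descriptions (community heuristics, not fixed Google values):

    # Flag titles and meta descriptions likely to truncate in the SERP.
    import requests
    from bs4 import BeautifulSoup

    TITLE_MAX, DESC_MAX = 60, 155  # rough heuristics, not Google specs

    def snippet_check(url):
        soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
        title = soup.title.get_text(strip=True) if soup.title else ""
        desc_tag = soup.find("meta", attrs={"name": "description"})
        desc = desc_tag.get("content", "").strip() if desc_tag else ""
        return {
            "title_length": len(title),
            "title_too_long": len(title) > TITLE_MAX,
            "description_missing": not desc,
            "description_too_long": len(desc) > DESC_MAX,
        }

    print(snippet_check("https://www.example.com/"))  # placeholder URL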

VI. Intelligently employ those primary, secondary, and related keywords

By related keywords, I mean those that are semantically connected, which Google is going to view as critical to proving that your content is relevant to the searcher's query. These belong in the page's text content. Why am I saying text content here? Because if you put them purely in visuals or in video or some other embeddable format that Google can't easily parse out, eeh, they might not count it. They might not treat it as content that's actually on the page, and you need to prove to Google that you have the relevant keywords on the page.
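
One way to sanity-check this is to extract only the parseable text of a page, ignoring anything locked inside images or video, and confirm your terms actually appear in it. A minimal sketch (the keywords are hypothetical):

    # Confirm target keywords appear in a page's extractable text content.
    import requests
    from bs4 import BeautifulSoup

    def keywords_in_text(url, keywords):
        soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
        for tag in soup(["script", "style"]):  # drop non-content markup
            tag.decompose()
        text = soup.get_text(" ").lower()
        return {kw: kw.lower() in text for kw in keywords}

    print(keywords_in_text(
        "https://www.example.com/",  # placeholder URL
        ["emergency plumbing repair", "burst pipe repair"],  # hypothetical terms
    ))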

VII. Where relevant and possible, use rich snippets and schema markup to enhance the potential visibility that you’re going to get.

This is not possible for everyone. But in some cases, in the case that you're getting into Google News, or in the case that you're in the recipe world and you can get visuals and images, or in the case where you have a featured snippet opportunity and you can get the visual for that featured snippet along with that credit, or in the case where you can get rich snippets around travel or around flights or other verticals that schema is supporting right now, well, that's great. You should take advantage of those opportunities.
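
Schema markup most often ships as a JSON-LD block in the page's HTML. Here is a minimal sketch that builds one for an article page; every value is a placeholder, and you should check schema.org and Google's structured data documentation for the properties your vertical actually supports:

    # Build a minimal schema.org Article block as JSON-LD.
    # All values are placeholders; validate real markup with
    # Google's structured data tools before shipping it.
    import json

    article = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "Example headline",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "datePublished": "2018-01-05",
    }

    print('<script type="application/ld+json">')
    print(json.dumps(article, indent=2))
    print("</script>")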

VIII. Optimize the page to load as fast as possible and look great.

I mean look great from a visual, UI perspective and look great from a user experience perspective, letting someone go all the way through and accomplish their task in an easy, fulfilling way on every device, at every speed, and make it secure too. Security is critically important. HTTPS is not the only thing, but it is a big part of what Google cares about right now; HTTPS was a big focus in 2016 and 2017, and it will certainly continue to be a focus for Google in 2018.
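
A crude smoke test can cover both points at once: time a request and confirm the page resolves to HTTPS. A minimal sketch (real performance work needs Lighthouse or WebPageTest; this only catches gross problems):

    # Smoke test: rough fetch time plus an HTTPS check.
    import requests

    def speed_and_https(url):
        r = requests.get(url, timeout=30)
        return {
            "final_url": r.url,
            "served_over_https": r.url.startswith("https://"),
            # Server response time only -- not full page render time.
            "fetch_seconds": r.elapsed.total_seconds(),
        }

    print(speed_and_https("http://www.example.com/"))  # placeholder URL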

IX. You need to have a great answer to the question: Who will help amplify this and why?

When you have that great answer, I mean a specific list of people and publications who are going to help you amplify it, you’ve got to execute to earn solid links and mentions and word of mouth across the web and across social media so that your content can be seen by Google’s crawlers and by human beings, by people as highly relevant and high quality.

You do all this stuff, you’re going to rank very well in 2018. Look forward to your comments, your additions, your contributions, and feel free to look through the tweet thread as well.

Thanks to all of you who contributed via Twitter and to all of you who followed us here at Moz and Whiteboard Friday in 2017. We hope you have a great year ahead. Thanks for watching. Take care.

Video transcription by Speechpad.com



Moz Blog


The SEO Competitive Analysis Checklist

Posted by zeehj

The SEO case for competitive analyses

“We need more links!” “I read that user experience (UX) matters more than everything else in SEO, so we should focus solely on UX split tests.” “We just need more keywords on these pages.”

If you dropped a quarter on a dark stretch of sidewalk, would you walk to the next block to look for it, just because that's where the street light is? The obvious answer is no, yet many marketers get tunnel vision when it comes to where their efforts should be focused.

1942 June 3, Florence Morning News, Mutt and Jeff Comic Strip, Page 7, Florence, South Carolina. (NewspaperArchive)

That's why I'm sharing a checklist with you today that will allow you to compare your website to your search competitors, and identify your site's strengths, weaknesses, and potential opportunities based on ranking factors we know are important.

If you’re unconvinced that good SEO is really just digital marketing, I’ll let AJ Kohn persuade you otherwise. As any good SEO (or even keyword research newbie) knows, it’s crucial to understand the effort involved in ranking for a specific term before you begin optimizing for it.

It’s easy to get frustrated when stakeholders ask how to rank for a specific term, and solely focus on content to create, or on-page optimizations they can make. Why? Because we’ve known for a while that there are myriad factors that play into search engine rank. Depending on the competitive search landscape, there may not be any amount of “optimizing” that you can do in order to rank for a specific term.

The story that I’ve been able to tell my clients is one of hidden opportunity, but the only way to expose these undiscovered gems is to broaden your SEO perspective beyond search engine results page (SERP) position and best practices. And the place to begin is with a competitive analysis.

Competitive analyses help you evaluate your competition's strategies to determine their strengths and weaknesses relative to your brand. When it comes to digital marketing and SEO, however, there are so many ranking factors and best practices to consider that it can be hard to know where to begin. That's why my colleague, Ben Estes, created a competitive analysis checklist (not dissimilar to his wildly popular technical audit checklist) that I've souped up for the Moz community.

This checklist is broken out into sections that reflect key elements from our Balanced Digital Scorecard. As previously mentioned, this checklist is to help you identify opportunities (and possibly areas not worth your time and budget). But this competitive analysis is not prescriptive in and of itself. It should be used as its name suggests: to analyze what your competition’s “edge” is.

Methodology

Choosing competitors

Before you begin, you’ll need to identify six brands to compare your website against. These should be your search competitors (who else is ranking for terms that you’re ranking for, or would like to rank for?) in addition to a business competitor (or two). Don’t know who your search competition is? You can use SEMRush and Searchmetrics to identify them, and if you want to be extra thorough you can use this Moz post as a guide.

Sample sets of pages

For each site, you’ll need to select five URLs to serve as your sample set. These are the pages you will review and evaluate against the competitive analysis items. When selecting a sample set, I always include:

  • The brand’s homepage,
  • Two “product” pages (or an equivalent),
  • One to two “browse” pages, and
  • A page that serves as a hub for news/informative content.

Make sure the sites have equivalent pages to one another, for a fair comparison.

Scoring

The scoring options for each checklist item range from zero to four, and are determined relative to each competitor’s performance. This means that a score of two serves as the average performance in that category.

For example, if each sample set has one unique H1 tag per page, then each competitor would get a score of two for H1s appear technically optimized. However, if a site breaks one (or more) of the requirements below, it should receive a score of zero or one (a scoring sketch follows the list):

  1. One or more pages within sample set contains more than one H1 tag on it, and/or
  2. H1 tags are duplicated across a brand’s sample set of pages.
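
To keep that rule consistent across six or more competitors, it helps to encode it. A minimal sketch of the H1 example above (the deductions are stand-ins; adapt them to your own criteria):

    # Score a brand's sample set from 0-4 on "H1s appear technically optimized."
    # A 2 is the baseline; deductions follow the two rules above.
    def score_h1s(pages):
        """pages: a list of lists; each inner list holds one sampled page's H1 texts."""
        score = 2
        if any(len(h1s) != 1 for h1s in pages):
            score -= 1  # some page has zero or multiple H1 tags
        all_h1s = [h1 for h1s in pages for h1 in h1s]
        if len(all_h1s) != len(set(all_h1s)):
            score -= 1  # H1 text duplicated across the sample set
        return max(score, 0)

    # Hypothetical sample set: five pages, one H1 each, one duplicated.
    sample = [["Home"], ["Red Widgets"], ["Blue Widgets"], ["Red Widgets"], ["Guides"]]
    print(score_h1s(sample))  # -> 1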

Checklist

Platform (technical optimization)

Title tags appear technically optimized. This measurement should be as quantitative as possible, and refer only to technical SEO rather than its written quality. Evaluate the sampled pages based on:

  • Only one title tag per page,
  • The title tag being correctly placed within the head tags of the page, and
  • Few to no extraneous tags within the title (e.g. ideally no inline CSS, and few to no span tags).

H1s appear technically optimized. As with the title tags, this is another quantitative measure: make sure the H1 tags on your sample pages are sound by technical SEO standards (and not based on writing quality). You should look for the following (a sketch automating both the title and H1 checks follows the list):

  • Only one H1 tag per page, and
  • Few to no extraneous tags within the tag (e.g. ideally no inline CSS, and few to no span tags).
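
Both the title and H1 checks above are easy to automate across a sample set. A minimal sketch (it flags any nested tag as extraneous, which is stricter than the "few to no span tags" guideline):

    # Technical title and H1 checks for one sampled page.
    import requests
    from bs4 import BeautifulSoup

    def tag_checks(url):
        soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
        titles = soup.find_all("title")
        h1s = soup.find_all("h1")
        return {
            "one_title": len(titles) == 1,
            "title_in_head": bool(soup.head and soup.head.find("title")),
            "title_has_nested_tags": any(t.find() is not None for t in titles),
            "one_h1": len(h1s) == 1,
            "h1_has_nested_tags": any(h.find() is not None for h in h1s),
        }

    print(tag_checks("https://www.example.com/"))  # placeholder URL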

Internal linking allows indexation of content. Observe the internal outlinks on your sample pages, apart from the sites’ navigation and footer links. This line item serves to check that the domains are consolidating their crawl budgets by linking to discoverable, indexable content on their websites. Here is an easy-to-use Chrome plugin from fellow Distiller Dom Woodman to see whether the pages are indexable.

To get a score of "2" or more, your sample pages should link to pages that meet both criteria below (a link-checking sketch follows the list):

  • Produce 200 status codes (for all, or nearly all), and
  • Have no more than ~300 outlinks per page (including the navigation and footer links).
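
Here is a minimal sketch that gathers both numbers for one sampled page: it collects the internal links, counts them, and spot-checks their status codes (HEAD requests keep it light, though some servers answer HEAD differently than GET):

    # Count a page's internal outlinks and spot-check their status codes.
    from urllib.parse import urljoin, urlparse
    import requests
    from bs4 import BeautifulSoup

    def internal_link_report(url, max_checks=25):
        host = urlparse(url).netloc
        soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
        links = {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}
        internal = sorted(l for l in links if urlparse(l).netloc == host)
        statuses = {}
        for link in internal[:max_checks]:  # sample a subset to stay polite
            try:
                resp = requests.head(link, timeout=10, allow_redirects=True)
                statuses[link] = resp.status_code
            except requests.RequestException:
                statuses[link] = "error"
        return {"total_internal_links": len(internal), "sampled_statuses": statuses}

    report = internal_link_report("https://www.example.com/")  # placeholder URL
    print(report["total_internal_links"], "internal links found")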

Schema markup present. This is an easy check. Using Google’s Structured Data Testing Tool, look to see whether these pages have any schema markup implemented, and if so, whether it is correct. In order to receive a score of “2” here, your sampled pages need:

  • To have schema markup present, and
  • Be error-free.

Quality of schema is definitely important, and can make the difference of a brand receiving a score of “3” or “4.” Elements to keep in mind are: Organization or Website markup on every sample page, customized markup like BlogPosting or Article on editorial content, and Product markup on product pages.
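
The testing tool works one URL at a time, so for a first pass across dozens of sampled pages you can at least detect whether JSON-LD markup is present and parseable, and which types it declares. A minimal sketch (it checks syntax and presence only; Google's tool remains the arbiter of validity):

    # Detect JSON-LD blocks on a page and report their declared @type values.
    import json
    import requests
    from bs4 import BeautifulSoup

    def jsonld_types(url):
        soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
        found = []
        for block in soup.find_all("script", type="application/ld+json"):
            try:
                data = json.loads(block.string or "")
            except json.JSONDecodeError:
                found.append("UNPARSEABLE")
                continue
            items = data if isinstance(data, list) else [data]
            found.extend(i.get("@type", "unknown") for i in items if isinstance(i, dict))
        return found  # e.g. ["Organization", "BlogPosting"]

    print(jsonld_types("https://www.example.com/"))  # placeholder URL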

There is a “home” for newly published content. A hub for new content can be the site’s blog, or a news section. For instance, Distilled’s “home for newly published content” is the Resources section. While this line item may seem like a binary (score of “0” if you don’t have a dedicated section for new content, or score of “2” if you do), there are nuances that can bring each brand’s score up or down. For example:

  • Is the home for new content unclear, or difficult to find? Approach this exercise as though you are a new visitor to the site.
  • Does there appear to be more than one “home” of new content?
  • If there is a content hub, is it apparent that this is for newly published pieces?

We’re not obviously messing up technical SEO. This score is partly composed of each brand’s performance on the preceding line items (mainly Title tags appear technically optimized through Schema markup present).

It would be unreasonable to run a full technical audit of each competitor, but take into account your own site’s technical SEO performance if you know there are outstanding technical issues to be addressed. In addition to the previous checklist items, I also like to use these Chrome extensions from Ayima: Page Insights and Redirect Path. These can provide quick checks for common technical SEO errors.

Content

Title tags appear optimized (editorially). Here is where we can add more context to the overall quality of the sample pages’ titles. Even if they are technically optimized, the titles may not be optimized for distinctiveness or written quality. Note that we are not evaluating keyword targeting, but rather making a holistic (and broad) evaluation of how each competitor’s site approaches SEO factors. You should evaluate each page’s titles based on the following:

  • A unique title per page (title language does not repeat across the sample set),
  • Titles that are discrete from their page’s H1, and
  • Titles that represent the content on the page.

H1s appear optimized (editorially). The same rules that apply to titles for editorial quality also apply to H1 tags. Review each sampled page’s H1 for:

  • A unique H1 tag per page (language in H1 tags does not repeat),
  • H1 tags that are discrete from their page’s title, and
  • H1s represent the content on the page.

Internal linking supports organic content. Here you must look for internal outlinks outside of each site’s header and footer links. This evaluation is not based on the number of unique internal links on each sampled page, but rather on the quality of the pages to which our brands are linking.

While “organic content” is a broad term (and invariably differs by business vertical), here are some guidelines:

  • Look for links to informative pages like tutorials, guides, research, or even think pieces.
    • The blog posts on Moz (including this very one) are good examples of organic content.
  • Internal links should naturally continue the user’s journey, so look for topical progression in each site’s internal links.
  • Links to service pages, products, RSVP, or email subscription forms are not examples of organic content.
  • Make sure the internal links vary. If sampled pages are repeatedly linking to the same resources, this will only benefit those few pages.
    • This doesn’t mean that you should penalize a brand for linking to the same resource two, three, or even four times over. Use your best judgment when observing the sampled pages’ linking strategies.

Appropriate informational content. You can use the found “organic content” from your sample sets (and the samples themselves) to review whether the site is producing appropriate informational content.

What does that mean, exactly?

  • The content produced obviously fits within the site’s business vertical, area of expertise, or cause.
    • Example: Moz’s SEO and Inbound Marketing Blog is an appropriate fit for an SEO company.
  • The content on the site isn’t overly self-promotional, resulting in an average user not trusting this domain to produce unbiased information.
    • Example: If Distilled produced a list of “Best Digital Marketing Agencies,” it’s highly unlikely that users would find it trustworthy given our inherent bias!

Quality of content. Highly subjective, yes, but remember: you’re comparing brands against each other. Here’s what you need to evaluate here:

  • Are “informative” pages discussing complex topics under 400 words?
  • Do you want to read the content?
  • Largely, do the pages seem well-written and full of valuable information?
    • Conversely, are the sites littered with “listicles,” or full of generic info you can find in millions of other places online?

Quality of images/video. Also highly subjective (but again, compare your site to your competitors, and be brutally honest). Judge each site’s media items based on:

  • Resolution (do the images or videos appear to be high quality? Grainy?),
  • Whether they are unique (do the images or videos appear to be from stock resources?),
  • Whether the photos or videos are repeated on multiple sample pages.

Audience (engagement and sharing of content)

Number of linking root domains. This factor is exclusively based on the total number of dofollow linking root domains (LRDs) to each domain (not total backlinks).

You can pull this number from Moz’s Open Site Explorer (OSE) or from Ahrefs. Since this measurement is only for the total number of LRDs to each competitor, you don’t need to graph them. However, you will have an opportunity to display the sheer quantity of links by their domain authority in the next checklist item.

Quality of linking root domains. Here is where we get to the quality of each site’s LRDs. Using the same LRD data you exported from either Moz’s OSE or Ahrefs, you can bucket each brand’s LRDs by domain authority and count the total LRDs by DA. Log these into this third sheet, and you’ll have a graph that illustrates their overall LRD quality (and will help you grade each domain).
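
If your LRD export is a CSV with one row per linking root domain, the bucketing itself is a few lines of Python. A minimal sketch (the "Domain Authority" column name is a placeholder; match it to the headers in your actual OSE or Ahrefs export):

    # Bucket linking root domains by Domain Authority ranges.
    import csv
    from collections import Counter

    BUCKETS = [(0, 19), (20, 39), (40, 59), (60, 79), (80, 100)]

    def bucket_lrds(csv_path, da_column="Domain Authority"):  # placeholder header
        counts = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                da = float(row[da_column])
                for low, high in BUCKETS:
                    if low <= da <= high:
                        counts[f"DA {low}-{high}"] += 1
                        break
        return counts

    print(bucket_lrds("competitor_lrds.csv"))  # hypothetical export file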

Other people talk about our content. I like to use BuzzSumo for this checklist item. BuzzSumo allows you to see what sites have written about a particular topic or company. You can even refine your search to include or exclude certain terms as necessary.

You’ll need to set a timeframe to collect this information. Set this to the past year to account for seasonality.

Actively promoting content. Using BuzzSumo again, you can alter your search to find how many of each domain’s URLs have been shared on social networks. While this isn’t an explicit ranking factor, strong social media marketing is correlated with good SEO. Keep the timeframe to one year, same as above.

Creating content explicitly for organic acquisition. This line item may seem similar to Appropriate informational content, but its purpose is to examine whether the competitors create pages to target keywords users are searching for.

Plug the same URLs from your found “organic content” into SEMRush, and note whether they are ranking for non-branded keywords. You can grade the competitors on whether (and how many of) the sampled pages are ranking for any non-branded terms, and weight them based on their relative rank positions.

Conversion

You should treat this section as a UX exercise. Visit each competitor’s sampled URLs as though they are your landing page from search. Is it clear what the calls to action are? What is the next logical step in your user journey? Does it feel like you’re getting the right information, in the right order as you click through?

Clear CTAs on site. Of your sample pages, examine what the calls to action (CTAs) are. This is largely UX-based, so use your best judgment when evaluating whether they seem easy to understand. For inspiration, take a look at these examples of CTAs.

Conversions appropriate to several funnel steps. This checklist item asks you to determine whether the funnel steps towards conversion feel like the correct “next step” from the user’s standpoint.

Even if you are not a UX specialist, you can assess each site as though you are a first-time user. Document areas on the pages where you feel frustrated or confused, and note where the experience works well. User behavior is a ranking signal, so while this is a qualitative measurement, it can help you understand the UX of each site.

CTAs match user intent inferred from content. Here is where you’ll evaluate whether the CTAs match the user intent from the content as well as the CTA language. For instance, if a CTA prompts a user to click “for more information,” and takes them to a subscription page, the visitor will most likely be confused or irritated (and, in reality, will probably leave the site).


This analysis should help you holistically identify areas of opportunity available in your search landscape, without having to guess which “best practice” you should test next. Once you’ve started this competitive analysis, trends among the competition will emerge, and expose niches where your site can improve and potentially outpace your competition.

Kick off your own SEO competitive analysis and comment below on how it goes! If this process is your jam, or you’d like to argue with it, come see me speak about these competitive analyses and the campaigns they’ve inspired at SearchLove London. Bonus? If you use that link, you’ll get £50 off your tickets.



Moz Blog


Optimizing for Mobile Search: A checklist to improve local SEO

Mobile devices now account for nearly 60 percent of all searches. Are your local sites and landing pages in the best position to show up in the SERPs and engage mobile consumers? Join us for an in-depth look at how to optimize your location-based marketing strategy for the mobile consumer. We’ll…



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Your Step-by-Step Email Marketing Strategy Guide [Free Checklist]


Perhaps you already feel like you have a good handle on the little details of email marketing, like writing subject lines, creating opt-in forms for your site, and setting up your welcome message.

But maybe what you really need is a comprehensive road map for email marketing that presents a smart email marketing strategy as a series of manageable steps.

If that’s the case, I’ve got great news! To wrap up our 10-part email marketing series, that’s exactly what you’re going to get.

This step-by-step email marketing strategy guide summarizes how to create your list, get subscribers, build solid relationships, and make well-timed offers.

We’ve also outlined these steps in a handy PDF checklist that you can download for free, so keep reading …

Step #1: Sign up for an account with a reputable email service provider

To make sure your email marketing strategy stands on a firm, ethical foundation, you’ll need a reputable email service provider to help you collect, track, and get in touch with your contacts.

An email service provider (ESP) will help you:

  • Manage your opt-in and opt-out process
  • Stay CAN-SPAM compliant
  • Provide mobile-friendly email templates for your messages
  • Track and report your open rates, click-through rates, and unsubscribe information

Shop carefully when you’re choosing an ESP, because changing service providers down the road can be tedious. When you’re selecting an ESP, consider your budget, the level of tech support you’re going to receive, and the service’s user interface.

Step #2: Add smart opt-in forms to your site and give away an enticing incentive

Once you’ve got an email service provider, it’s time to invite people to join your list.

More people will sign up for your list if you give away a useful, enticing incentive in exchange for a subscriber’s email address. It should be simple and highly useful.

Your goal is to create content so compelling that your potential subscribers don’t hesitate to hand over their email addresses to get it.

You may need to try multiple incentives until you find one that strikes the perfect chord with your visitors, so create your first one quickly and test it out with your audience right away.

Once you’ve got your incentive ready to go, add attention-getting opt-in forms to your site. Remember to think beyond simple sidebar opt-in forms and consider feature boxes, pop-overs, and blog post footer opt-in forms.

Step #3: Actively build your email list

Now that your incentive is ready and your visitors have a way to join your community, it’s time to actively build your email list.

You can try these classic, time-tested list-building techniques:

Step #4: Send regular, useful content to your list

As soon as you gain subscribers, begin sending them high-quality content on a regular basis. You can send newsletters, content notification emails, or both.

Aim to regularly send content to your subscribers, provide tons of value, and train your community members to consistently click on the links in your emails.

You can also set up an autoresponder series that sends carefully selected content to your list. To start, you can compose three emails for your autoresponder series and add more over time, as needed.

Follow CAN-SPAM regulations when you email your list and review this top-to-bottom checklist before you click “send.”

Step #5: Present relevant offers to your list on a regular basis

Once you’ve built solid relationships with your subscribers through free content, start sending them relevant offers via email.

You can begin adding relevant offers to your newsletters or content notification emails, or add sales-related emails to your autoresponder sequence(s).

When sending emails that feature offers, include a strong call-to-action link that prompts your subscriber to take the next step — whether that’s setting up a consultation, buying a product, or checking out your latest online program.

Call-to-action links should be easy to click on and work best when they stand out visually — particularly on mobile devices.

Continually repeat Steps 3 – 5

Your email marketing strategy is an evolving process — it’s typically not a “set it and forget it” task.

You need to continually build your list, publish valuable content, and send relevant offers, so it’s a good idea to repeat Steps 3 through 5.

It’s also a good idea to regularly evaluate your email marketing strategy and the accuracy of the messages you send, as well as make sure your emails are sending as expected. Review your welcome message, offers, and any automated operations (like autoresponders or RSS feeds that send new content to your list automatically).

Craft an effective email marketing strategy for your business

Your email subscribers are some of your most valuable business assets — and building and strengthening your list should be a top priority for your business.

When you follow this five-step process, you’ll be on your way to attracting a large list of loyal subscribers, building solid relationships with those subscribers, and using email as one of your most successful sales tools.

Don’t forget to download the PDF checklist of this email marketing strategy guide, along with a glossary of common email marketing terms (61 KB).

Catch up on all the posts in our email marketing series

This is the last post in our current email marketing series, so now I’m officially handing the reins over to you.

You can use this series of posts to craft a smart email strategy that fits with the rest of your content marketing plans and goals.

Make sure to stay in touch, and fill me in on your email marketing success story!

The post Your Step-by-Step Email Marketing Strategy Guide [Free Checklist] appeared first on Copyblogger.


Copyblogger

