Tag Archive | "Behind"

James Wong Howe Google doodle honors influential cinematographer behind more than 100 US films

Today marks the anniversary of the 1934 release of ‘The Thin Man,’ one of Howe’s most notable films.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Abraham Ortelius Google doodle honors cartographer behind first modern day atlas

Published on this date in 1570, Ortelius’ “Theatrum Orbis Terrarum” included a collection of maps from scientists, geographers and cartographers.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing


Google Shares Details About the Technology Behind Googlebot

Posted by goralewicz

Crawling and indexing have been hot topics over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. None of those problems existed in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know exactly how Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.

The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tools in our arsenal are Google Search Console and its Fetch and Render tool. Once your website includes more than HTML and CSS, there’s a lot of guesswork involved in how your content will be indexed by Google. This approach is risky, expensive, and can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it’s extremely difficult to get to the bottom of the problem and find the fragments of code responsible for the indexing issues.

Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:

Interestingly, this tweet didn’t get nearly as much attention as I would expect.

So what does Ilya’s revelation in this tweet mean for SEOs?

Knowing that Chrome 41 is the technology behind the Web Rendering Service (WRS) is a game-changer. Before this announcement, our only option was to use Fetch and Render in Google Search Console to see our pages as the WRS rendered them. Now we can troubleshoot technical problems that would otherwise have required experimenting and building staging environments: all you need to do is download and install Chrome 41 and see how your website loads in that browser. That’s it.

You can check the features and capabilities that Chrome 41 supports by visiting Caniuse.com or Chromestatus.com (Googlebot should support similar features). These two websites make a developer’s life much easier.

Even though Ilya didn’t name an exact version, we can find the Chrome version used by the WRS by looking at our server logs: it’s Chrome 41.0.2272.118.
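If you want to verify this against your own site, here’s a minimal sketch (Node/TypeScript) of pulling Chrome version numbers out of Google’s rendering requests in an access log. The log path, the log format, and the user-agent filter are assumptions; adjust them to match how your server actually records Googlebot traffic.

```typescript
import { readFileSync } from "fs";

// Scan an access log and report which Chrome versions Google's rendering
// requests announce in their user-agent strings.
// Assumes a combined-style log with the user agent on each line;
// the path and the Googlebot filter below are placeholders for your setup.
const log = readFileSync("/var/log/nginx/access.log", "utf8");

const versions = new Map<string, number>();

for (const line of log.split("\n")) {
  if (!/Googlebot|Google Search Console/i.test(line)) continue; // assumed filter
  const match = line.match(/Chrome\/(\d+\.\d+\.\d+\.\d+)/);
  if (match) {
    versions.set(match[1], (versions.get(match[1]) ?? 0) + 1);
  }
}

for (const [version, hits] of versions) {
  console.log(`Chrome ${version}: ${hits} requests`);
}
```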

It will be updated sometime in the future

Chrome 41 was released two years ago (in 2015), so it’s far behind the current version of the browser. However, as Ilya Grigorik said, an update is coming:

I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!

Google uses Chrome 41 for rendering. What does that mean?

We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?

Not so fast. Here is what Ilya Grigorik had to say in response to this question:

We now know the WRS’s capabilities for rendering JavaScript and how to debug them, which lets us troubleshoot and diagnose problems far more precisely. However, remember that not all crawlers support JavaScript crawling. As of today, JavaScript crawling is only supported by Google and Ask (and Ask is most likely powered by Google). Even if you don’t care about social media crawlers or search engines other than Google, keep in mind that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript framework crawling and indexing).

For example, we ran into a problem with indexing Polymer’s generated content. Ilya Grigorik provided insight on how to deal with such issues in our experiment (below). We used this feedback to make http://jsseo.expert/polymer/ indexable — it now works fine in Chrome 41 and indexes properly.

“If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it.”
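The referenced debug snippet isn’t reproduced here, but the general idea looks something like the minimal sketch below: catch runtime JavaScript errors and write them into the DOM, so they show up in Fetch and Render’s rendered preview, where there is no DevTools console to check. This is an illustrative version, not the exact snippet from the post mentioned above.

```typescript
// Minimal sketch: surface runtime JavaScript errors in the rendered page itself,
// so a crawler preview (like Fetch and Render) makes them visible.
// Illustrative only, not the exact snippet referenced above.
window.onerror = (message, source, line, column) => {
  const box = document.createElement("pre");
  box.textContent = `JS error: ${message} (${source}:${line}:${column})`;
  box.style.cssText = "color:red;border:1px solid red;padding:8px";
  // document.body may not exist if the error fires very early, so fall back.
  (document.body || document.documentElement).appendChild(box);
  return false; // let the error also reach the console as usual
};
```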

I believe this is another powerful tool for web developers who want to make their JavaScript websites indexable.

If you want to see a live example, open http://jsseo.expert/angular2-bug/ in Chrome 41 and use the Chrome Developer Tools to play with JavaScript troubleshooting (screenshot below):

Fetch and Render is the Chrome v. 41 preview

There’s another interesting thing about Chrome 41. Google Search Console’s Fetch and Render tool is simply the Chrome 41 preview. The right-hand-side view (“This is how a visitor to your website would have seen the page”) is generated by the Google Search Console bot, which is… Chrome 41.0.2272.118 (see screenshot below).


There’s evidence that both Googlebot and Google Search Console Bot render pages using Chrome 41. Still, we don’t exactly know what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we’re not able to point them out.

Chrome 41 vs Fetch as Google: A word of caution

Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there’s a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.

Chrome 41 preview:

Image blurred for privacy

The above page has quite a lot of content and images, but it looks completely different in Google Search Console.

Google Search Console preview for the same URL:

As you can see, Google Search Console’s preview of this URL is completely different than what you saw on the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.

From what we’ve noticed, Google Search Console renders CSS a little bit differently than Chrome 41 does. This doesn’t happen often, but as with most tools, we need to double-check whenever possible.

This leads us to a question…

What features are supported by Googlebot and WRS?

According to the Rendering on Google Search guide:

  • Googlebot doesn’t support IndexedDB, WebSQL, and WebGL.
  • HTTP cookies and local storage, as well as session storage, are cleared between page loads.
  • All features requiring user permissions (like Notifications API, clipboard, push, device-info) are disabled.
  • Google can’t index 3D and VR content.
  • Googlebot only supports HTTP/1.1 crawling.

The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.
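Given that list, it’s safer to feature-detect these APIs than to assume they’re available, and to treat anything Googlebot doesn’t support as a progressive enhancement on top of indexable content. A minimal sketch of what that can look like (the fallback behavior is up to your site):

```typescript
// Minimal feature-detection sketch: don't assume APIs from the list above exist
// when the page is rendered by a crawler. Render core content unconditionally
// and treat these capabilities as progressive enhancements.

function hasIndexedDB(): boolean {
  return typeof indexedDB !== "undefined";
}

function hasWebGL(): boolean {
  try {
    const canvas = document.createElement("canvas");
    return !!(canvas.getContext("webgl") || canvas.getContext("experimental-webgl"));
  } catch {
    return false;
  }
}

function storageIsUsable(): boolean {
  // Storage may exist but be cleared between page loads (see the list above),
  // so only ever use it as a cache, never as the sole source of content.
  try {
    localStorage.setItem("__probe__", "1");
    localStorage.removeItem("__probe__");
    return true;
  } catch {
    return false;
  }
}

if (!hasWebGL()) {
  // e.g. swap a 3D/VR widget for a static image plus descriptive text,
  // so the content remains visible and indexable.
}
```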

No HTTP/2 support (still)

We’ve mostly been covering how Googlebot uses Chrome, but there’s another recent discovery to keep in mind.

There is still no support for HTTP/2 for Googlebot.

Since it’s now clear that Googlebot doesn’t support HTTP/2, this means that even if your website supports HTTP/2, you can’t drop HTTP/1.1 optimization. Googlebot can crawl only over HTTP/1.1.
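A quick way to confirm your site still answers HTTP/1.1 requests cleanly (for example, after a server or CDN change) is to make a plain request with a client that only speaks HTTP/1.1. Node’s built-in https client is one such client; the hostname below is a placeholder.

```typescript
import { request } from "https";

// Node's built-in https client speaks HTTP/1.1, so a successful response here
// confirms the server still serves the protocol Googlebot crawls with.
// Replace the hostname with your own site.
const req = request({ host: "www.example.com", path: "/", method: "GET" }, (res) => {
  console.log(`HTTP/${res.httpVersion} ${res.statusCode}`);
  res.resume(); // drain the body so the socket is released
});
req.on("error", (err) => console.error("Request failed:", err.message));
req.end();
```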

There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.

Via https://developers.google.com/search/docs/guides/r…

Googlebot’s future

Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It’s hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.

This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.

Chrome 41 vs. Googlebot’s crawling efficiency

Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it’s crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”

Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. That doesn’t mean JavaScript is great for SEO. I’ve gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.

In summary

Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we’re testing — we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from indexing problems, like when Hulu.com’s JavaScript SEO backfired.

It’s safe to assume that Chrome 41 will now be a part of every SEO’s toolset.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Understanding the Brain Science Behind Effective Persuasion, with Roger Dooley


The ancient Greeks — notably Aristotle — used anecdotal observation to nail much of what we know about persuasion. The fundamentals of the art haven’t changed much in 2,300 years, because human nature hasn’t changed, even as the context in which we operate has changed dramatically.

In the 20th century, social psychology took the ancient principles of rhetoric and proved them correct in controlled experiments. The work of Dr. Robert Cialdini in particular helped prove the power of authority, social proof, scarcity, and other fundamental aspects of influence.

Now, we have neuroscience. Brain imaging allows us to go beyond observing human response alone, and see which parts of the brain “light up” while responding the way we do in certain situations.

Roger Dooley is the author of Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing. Today he joins us to reveal the keys to understanding what makes people choose and behave the way they do, along with some cutting-edge thoughts about “tribal” marketing.

Listen to this Episode Now

The post Understanding the Brain Science Behind Effective Persuasion, with Roger Dooley appeared first on Copyblogger.


Copyblogger


Content Gating: When, Whether, and How to Put Your Content Behind an Email/Form Capture – Whiteboard Friday

Posted by randfish

Have you ever considered gating your content to get leads? Whether you choose to have open-access content or gate it to gather information, there are benefits and drawbacks you should be aware of. In today’s Whiteboard Friday, Rand weighs the pros and cons of each approach and shares some tips for improving your process, regardless of whichever route you go.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about content gating.

This is something that a lot of content marketers use, particularly those who are interested in generating leads, individuals that their salespeople or sales teams or outreach folks or business development folks can reach out to specifically to sell a product or start a conversation. Many content marketers and SEOs use this type of content as a lure to essentially attract someone, who then fills in form fields to give enough information so that the sales pipeline gets filled or the leads pipeline gets filled, and then the person gets the content.

As opposed to the classic model that we’re used to in a more open content marketing and open SEO world of, “Let me give you something and then hopefully get something in return,” it’s, “You give me something and I will give you this thing in return.” This is a very, very popular tactic. You might be familiar with Moz and know that my general bias and Moz’s general bias is against content gating. We sort of have a philosophical bias against it, with the exception of some enterprise work on the Moz Local side, where that marketing team may include some gating in the future. But generally, at Moz, we’re sort of against it.

However, I don’t want to be too biased. I recognize that it does have benefits, and I want to explain some of those benefits and drawbacks so that you can make your own choices of how to do it. Then we’re going to rock through some recommendations, some tactical tips that I’ve got for you around how you can improve how you do it, no matter whether you are doing open content or full content gating.

Benefits of gating content

Let’s compare the two models. This is the gated idea. So you get this free report on the state of artificial intelligence in 2016. But first, before you get that report, you fill in all these fields: name, email, role, company website, Twitter, LinkedIn, what is your budget for AI in 2017, and you fill in a number. I’m not kidding here. Many of these reports require these and many other fields to be filled in. I have personally filled in several intense ones in order to get a report back. So it’s even worked on me at times.

The opposite of that, of course, would be the report is completely available. You get to the webpage, and it’s just here’s the state of AI, the different sections, and you get your graphs and your charts, and all your data is right in there. Fantastic, completely free access. You’ve had to give nothing, just visit the website.

The benefits of gating are you actually get:

  • More information about who specifically accessed the report. Granted, some of this information could be faked. There are people who work around that by verifying and validating at least the email address or those kinds of things.
  • Those who expend the energy to invest in the report may view the data or the report itself as more valuable, more useful, more trustworthy, to carry generally greater value. This is sort of an element of human psychology, where we value things that we’ve had to work harder to get.
  • Sales outreach to the folks who did access it may be much easier and much more effective because you obviously have a lot of information about those people, versus if you had collected only an email or no information at all, in which case it would be close to impossible.

Drawbacks of gating content

Let’s walk through the drawbacks of gating, some things that you can’t do:

  • Smaller audience potential. It is much harder to get this in front of tons of people. Maybe not this page specifically, but certainly it’s hard to get amplification of this, and it’s very hard to get an audience, get many, many people to fill out all those form fields.
  • Harder to earn links and amplification. People generally do not link to content like this. By the way, the people who do link to and socially amplify stuff like this usually do it with the actual file. So what they’ll do is they’ll look for State of AI 2016, filetype:pdf, site:yourdomain.com, and then they’ll find the file behind whatever you’ve got. I know there are some ways to gate that even such that no one can access it, but it’s a real pain.
  • It also is true that for some folks, this leaves a very bad taste in their mouth. They have a negative brand perception around it. Now, that negative brand perception could be around having to fill this out. It could be around whether the content was worth it after they filled this out. It could be about the outreach that happens to them after they filled this out, when their interest in getting this data was not to start a sales conversation. You also lose a bunch of your SEO benefits, because you don’t get the links and you don’t get the engagement. If you do rank for this, it tends to be the case that your bounce rate is very high, much higher than for other people who might rank for things like the state of AI 2016. So you just struggle.

Benefits of open access

What are the benefits and drawbacks of open access? Well, benefits, pretty obvious:

  • Greater ability to drive traffic from all channels, of course — social, search, word of mouth, email, whatever it is. You can drive a lot more people here.
  • There’s a larger future audience for retargeting and remarketing. So the people who do reach the report itself in here, you certainly have an opportunity. You could retarget and remarket to them. You could also reach out to them directly. Maybe you could retarget and remarket to people who’ve reached this page but didn’t fill in any information. But these folks here are a much greater audience potential for those retargeting and remarketing efforts. Larry Kim from WordStream has shown some awesome examples. Marty Weintraub from Aimclear also has shown some awesome examples of how you can do that retargeting and remarketing to folks who’ve reached content.
  • SEO benefits via links that point to these pages, via engagement metrics, via their ranking ability, etc. etc. You’re going to do much better with this. We do much better with the Beginner’s Guide to SEO on Moz than we would if it were gated and you had to give us your information first, of course.

Overall, if what you are trying to achieve is, rather than leads, simply to get your message to the greatest number of people, this is a far, far better effort. This is likely to reach a much bigger audience, and that message will therefore reach that much larger audience.

Drawbacks of open access

There are some drawbacks for this open access model. It’s not without them.

  • It might be hard or even totally impossible to convert many or most of the visits that come to open access content into leads or potential leads. It’s just the case that those people are going to consume that content, but they may never give you information that will allow you to follow up or reach out to them.
  • Information about the most valuable and important visitors, the ones who would have filled this thing out and would have been great leads is lost forever when you open up the content. You just can’t capture those folks. You’re not going to get their information.

So these two drawbacks, along with the benefits of the gated content model, are what drive many folks to gate their content.

Recommendations

So, my recommendations. It’s a fairly simple equation. I urge you to think about this equation from as broad a strategic perspective and then a tactical accomplishment perspective as you possibly can.

1. If audience size, reach, and future marketing benefits are greater than detailed leads as a metric or as a value, then you should go open access. If the reverse is true, if detailed leads are more valuable to you than the audience size, the potential reach, the amplification and link benefits, and all the future marketing benefits that come from those things, the ranking benefits and SEO benefits, if that’s the case, then you should go with a gated model. You get lots of people with an open access model; with a gated content model you get one person, but you know all their information.

2. It is not the case that this has to be completely either/or. There are modified ways to do both of these tactics in combination and concert. In fact, that can be potentially quite advantageous.

So a semi-gated model is something we’ve seen a few content marketers and companies start to do, where they have a part of the report or some of the most interesting aspects of the report or several of the graphics or an embedded SlideShare or whatever it is, and then you can get more of the report by filling in more items. So they’re sharing some stuff, which can potentially attract engagement and links and more amplification, and use in all sorts of places and press, and blog posts and all that kind of stuff. But then they also get the benefit of some people filling out whatever form information is critical in order to get more of that data if they’re very interested. I like this tease model a lot. I think that can work really, really well, especially if you are giving enough to prove your value and worth, and to earn those engagement and links, before you ask for a lot more.

You can go the other way and go a completely open model but with add-ons. So, for example, in this, here’s the full report on AI. If you would like more information, we conducted a survey with AI practitioners or companies utilizing AI. If you’d like the results of that survey, you can get that, and that’s in the sidebar or as a little notification in the report, a call to action. So that’s full report, but if you want this other thing that maybe is useful to some of the folks who best fit the interested in this data and also potentially interested in our product or service, or whatever we’re trying to get leads for, then you can optionally put your information in.

I like both of these. They sort of straddle that line.

3. No matter which one or which modified version you do, you should try and optimize the outcomes. That means in an open content model:

  • Don’t ignore the fact that you can still do retargeting to all the people who visited this open content and get them back to your website, on to potentially a very relevant offer that has a high conversion rate and where you can do CRO testing and those kinds of things. That is completely reasonable and something that many, many folks do, Moz included. We do a lot of remarketing around the web.
  • You can drive low-cost, paid traffic to the content that gets the most shares in order to bump it up and earn more amplification, earn more traffic to it, which then gives you a broader audience to retarget to or a broader audience to put your CTA in front of.
  • If you are going to go completely gated, you can infer a lot of these form fields or use software to fill them in, and therefore get a higher conversion rate (see the sketch after this list). So for example, I’m asking for name, email, role, company, website, Twitter, and LinkedIn. In fact, I could ask exclusively for LinkedIn and email and get every single one of those from just those two fields. I could even kill email and ask them to sign in with LinkedIn and then request the email permission after or as part of that request. So there are options here. You can also ask for name and email, and then use a software service like FullContact’s API to get all of the data around the company, website, role and title, LinkedIn, Twitter, Facebook, etc. that are associated with that name or email address. So then you don’t have to ask for so much information.
  • You can try putting your teaser content in multiple channels and platforms to maximize its exposure so that you drive more people to this get more. If you’re worried that hey this teaser won’t reach enough people to be able to get more of those folks here, you can amplify that through putting it on SlideShare or republishing on places like Medium or submitting the content in guest contributions to other websites in legit ways that have overlapped audiences and share your information that you know is going to resonate and will make them want more. Now you get more traffic back to these pages, and now I can convert more of those folks to the get more system.
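Here’s a minimal sketch of that “ask for less, enrich later” idea. The endpoint, auth header, and response field names below are placeholders for whichever enrichment provider you use (FullContact and others offer person-lookup APIs), not any provider’s actual contract; check your provider’s documentation for the real calls.

```typescript
// Hypothetical sketch: enrich a short lead form (name + email) into a fuller
// profile via a person-enrichment API, instead of asking visitors for every
// field up front. The endpoint, auth header, and response field names below
// are placeholders, not a real provider's API.
interface EnrichedLead {
  email: string;
  name?: string;
  company?: string;
  title?: string;
  linkedin?: string;
  twitter?: string;
}

async function enrichLead(name: string, email: string): Promise<EnrichedLead> {
  const response = await fetch("https://api.example-enrichment.com/v1/person", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY", // placeholder credential
    },
    body: JSON.stringify({ email }),
  });

  if (!response.ok) {
    // Enrichment failed: fall back to just what the visitor typed in.
    return { name, email };
  }

  const data = await response.json();
  return {
    name,
    email,
    company: data.company, // field names are assumptions
    title: data.title,
    linkedin: data.linkedin,
    twitter: data.twitter,
  };
}
```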

So content gating, not the end of the world, not the worst thing in the world. I personally dislike a lot of things about it, but it does have its uses. I think if you’re smart, if you play around with some of these tactical tips, you can get some great value from it.

I look forward to your ideas, suggestions, and experiences with content gating, and we’ll see you next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Moz Blog



Darren Rowse Opens Up About the Strategy Behind His New Show


People have been trying to get Darren Rowse to podcast for years. How can the founder of Problogger not have a podcast … right?

Now he does. And he joins The Showrunner hosts Jerod Morris and Jon Nastor for an inside look at the launch strategy behind his long-awaited show.

To start, they take you behind their strategy for conducting a 2-on-1 interview. It can be tricky and often requires planning and organization to do well.

Then Darren joins Jerod and Jon all the way from Melbourne, Australia to let them inside his mind during the week of his podcast launch.

They ask him a number of questions, including:

  • Why did he finally decide to start a podcast?
  • What have been Darren’s two biggest challenges in launching a podcast — despite all of his experience and the size of his audience?
  • How is he managing the production and publishing process for his 31-episodes-in-31-days launch strategy?
  • Why did Darren design his podcast launch around repurposed content?
  • How did his show’s current sponsorship come about? (And was he always planning on having a sponsor?)
  • Would Darren have fully sponsored his own show (like Rainmaker.FM does) if he had not secured a sponsor?
  • What advice would he share with fellow showrunners who want to follow his model of launching a podcast that has intrinsic, direct, and indirect profitability from the start?
  • And … once the initial daily launch is over, what will the regular schedule be?

As you’d expect, Darren provides thorough and insightful answers.

Click Here to Listen to The Showrunner on iTunes

Click Here to Listen on Rainmaker.FM

About the author

Rainmaker.FM

Rainmaker.FM is the premier digital marketing and sales podcast network. Get on-demand business advice from experts, whenever and wherever you want it.

The post Darren Rowse Opens Up About the Strategy Behind His New Show appeared first on Copyblogger.


Copyblogger


Sunshine State lagging behind in solar energy





Florida residents are realizing that exploiting the Sunshine State’s abundant solar energy is no easy task. Industry experts rank Florida third in the nation for rooftop solar energy potential, but 13th in the amount of solar energy actually produced.

Experts in renewable energy and the solar industry say Florida is lagging because the state is one of only four that require solar energy to be sold exclusively by utilities. This limits consumers’ options for going solar and increases the upfront financial commitment.

The Sierra Club sees the lack of consumer choice as harmful to the environment. The Tea Party sees it as an attack on personal freedom.

These unlikely allies are working on a 2016 ballot measure that would amend the Florida Constitution to loosen the state’s solar energy laws.

Latest solar news


The Why’s Behind Some Aspects Of Link Building

I recently got an email from a woman who had been reading through the link building articles here on SEOBook. She was new to the community and to SEO in general, and had questions she was shy about asking in our forum. I’ve answered her directly, but her questions were good and commonly asked, so I wanted to share my responses in case someone else would benefit.

While I know her first name, I don’t know what industry she is in or the name of her site, so my answers will be general. Here’s the first one:

Question: I’m trying to learn about link building and am going to try an article content creation tool. Where should I put my articles - can I put multiple articles on one blog site and each will act as a link, or does only one article per blog website count as a link?

Before I answer, I thought I’d provide some background information on a couple of key concepts as they relate to the question and linking in general.

Writing articles is a common and basic link building method; most articles are between 400 and 700 words and use a couple of keyword terms in the copy. Articles created by automated content tools don’t win Pulitzer Prizes and aren’t meant to; they’re written and dropped as a way to secure a lot of links which hopefully pass link popularity, or “link juice”. Overall the tactic still works, but it works best when the content is dropped on “quality” pages.

What’s a quality page? In a nutshell, it’s a page that ranks well for certain keyword phrases, has some age behind it, and has an active social profile. Pages rank for a number of reasons; suffice to say, if a page is ranking well, it’s doing something right and is a good place to secure links from. It’s hard to say definitively that the social aspect of things has a great algorithmic impact, but my sense is this signal is being given more weight than we’re being told; it’s just damn hard to prove. Plus, from a traffic and exposure standpoint, social can be huge; a site/blog with an active Twitter/Facebook presence is an asset, and one that can work to your advantage.

If you’re using article marketing and content creation tools as a way to attract links, you’re probably not going to create the type of content quality sites want to host. The type of content those tools spit out tends to end up on low-quality blogs and/or in article directories; neither carries much algorithmic weight, so you don’t get the link popularity or content citations you’re vying for. Why? To understand the “why” behind the question, we need to understand what link popularity is and how it’s used to influence the way your pages rank.

Link Popularity

In its basic form, link popularity is composed of three components and one influencing factor: link quantity, link quality, and anchor text, with relevance as the influencing factor.

  • Link quantity - the number of links pointing to a specific webpage.   Having lots of links is a good thing. :)
  • Link quality - quality is determined by the authority of the host pages/sites and the pages/sites linking to them. Quality flows from one page to the next through links. Most people know this factor as PageRank (TrustRank for Yahoo; I’m not sure what Bing calls it).
  • Anchor text - this is the clickable part of the link you see; it’s a query ranking indicator and an endorsement, telling both humans and bots what is about to come. Anchors using keyword phrases provide additional “weight” and carry semantic value. Google doesn’t spell out much for us when it comes to the importance of ranking influences, but they have in the case of anchor text:

“Anchor text influences the queries your site ranks for in the search results.”

While the comment above was made in 2007, and recent events might make it seem like anchors are no longer a key ranking component, that just isn’t the case. Anchor text itself is not the problem when it comes to poor rankings; aggressive webmasters are. It’s not smart to use the same anchor over and over, and it never has been. From a marketing and SEO standpoint, it’s best to use a wide range of anchors and to use them sparingly. If it doesn’t make sense to hyperlink a keyword phrase in your content – don’t. Nothing says “SEO article here” like multiple hyperlinked keyword anchors in the middle of the copy that lead to the same page or to pages that don’t support the conversation.

Make your content and your anchors conversational; if it makes sense to link out, do it. There’s nothing wrong with hyperlinking a “click here” or “for more information” in the body of your copy; it helps with the flow of information and mixes up your anchors.
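Before moving on to relevance, here is one way to picture how “quality flows from one page to the next through links.” The sketch below is the textbook PageRank power iteration in its simplest form, not what any engine literally runs; it just shows why a page linked from well-linked pages ends up with a higher score.

```typescript
// Minimal PageRank power-iteration sketch: "link juice" flowing through links.
// A simplified textbook version; real engines use far more signals than this.
type LinkGraph = Record<string, string[]>; // page -> pages it links to

function pageRank(graph: LinkGraph, damping = 0.85, iterations = 30): Record<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;
  let rank: Record<string, number> = Object.fromEntries(pages.map((p) => [p, 1 / n] as const));

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = Object.fromEntries(
      pages.map((p) => [p, (1 - damping) / n] as const)
    );
    for (const page of pages) {
      const outlinks = graph[page].filter((target) => target in next);
      if (outlinks.length === 0) continue; // dangling page: simplification, its score isn't redistributed
      const share = (damping * rank[page]) / outlinks.length;
      for (const target of outlinks) next[target] += share;
    }
    rank = next;
  }
  return rank;
}

// "home" is linked from every other page, so it ends up with the highest score.
console.log(pageRank({
  home: ["article", "about"],
  about: ["home"],
  article: ["home", "resource"],
  resource: [],
}));
```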

Relevance

Links to and from contextually relevant or thematically related sites/pages are supposed to convey more authority; relevance helps establish where you belong topically and/or geographically. You don’t have to get links from pages in your keyword niches, but it helps. Why? From an editorial standpoint, webmasters in the same or ancillary areas are more likely to link to other webmasters or pages that support their content. Like attracts like; the concept is the same here.

The relevance component can be a key factor in the phenomenon known as “negative SEO”. If you’re not familiar with the issue, read here; if you are, you know how easily this can happen to you. If you’ve always linked along in your topical and/or geographic niche and someone comes at you with tons of off-topic backlinks, it becomes easier to fight back and defend your link history. Stick to getting links from pages your demographic frequents and follow your history patterns.

Now that we have the link popularity explanations and support information out of the way, let’s go back to the original question:

Question:  Where should I put my articles – can I put multiple articles on one blog site and each will act as a link or does only one article per blog website count as a link?

Link building is less about what you do, and more about where you do it. Ideally you want to find:

  • a lot of pages (link quantity)
  • with high visible PageRank scores (link quality)
  • using keyword anchors (anchor text) 
  • on topically or geographically relevant pages (relevance) ranking well.

Sound familiar? Problem is, hitting all four points is not easy, even for a seasoned linker. There is a very high probability quality blogs won’t take basic, respun, or tool-generated content; they have reputations and readerships to satisfy. You’ll have to go to a blog with a less discriminating palate and offer your content. As long as the blog and your post are in the index, you will receive some measure of link popularity, but less than what you’d get from a well-ranked topical blog. In link building, the ultimate goal is to get your links on pages ranking well for whatever terms you are targeting. Simple in theory, not so easy in reality, so always strive to hit as many of the four link popularity factors outlined above as you can for maximum results.

There’s nothing wrong with hosting multiple articles on the same site or blog but it’s never a good idea to put too many link eggs in one blog basket.  Spread the wealth, preferably on blogs within your area. You will have a wider audience and expand your link and social graph which works to help you algorithmically.

Next question:

2) In which way should I spread my created articles across blog websites – am I correct in thinking duplicate use of article is a bad thing – each one should be unique?

If you have the time and resources to develop unique articles, that is your best course of action. If you don’t, reusing content is fine as long as it’s different enough that anyone reading it won’t be able to quote a sentence verbatim. The engines frown on content spread around for ranking purposes; Google has a page on this subject here. To be safe, freshen up your content with new material each time you drop it, include new images and video, and change up the anchors and where they point.
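If you’re unsure whether a reworked article is “different enough,” one rough sanity check is to compare overlapping word sequences between the two versions. Here’s a minimal sketch; the 5-word shingle size and the 0.5 threshold are arbitrary illustrations, not anything the engines publish.

```typescript
// Rough duplicate-content sanity check: Jaccard similarity of 5-word shingles.
// A high score means long runs of identical wording survive between versions.
// The shingle size and threshold are arbitrary illustrations, not engine rules.
function shingles(text: string, size = 5): Set<string> {
  const words = text.toLowerCase().replace(/[^\w\s]/g, "").split(/\s+/).filter(Boolean);
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(" "));
  }
  return out;
}

function similarity(a: string, b: string): number {
  const sa = shingles(a);
  const sb = shingles(b);
  if (sa.size === 0 || sb.size === 0) return 0;
  let overlap = 0;
  for (const s of sa) if (sb.has(s)) overlap++;
  return overlap / (sa.size + sb.size - overlap); // Jaccard index
}

const originalArticle = "paste the first version of the article here";
const rewrittenArticle = "paste the reworked version here";

const score = similarity(originalArticle, rewrittenArticle);
if (score > 0.5) {
  console.warn(`Versions share ${Math.round(score * 100)}% of their 5-word phrases; rework it further.`);
}
```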

3) Do keywords throughout an article’s/blog’s text (on a blog site, not the promoted website) have any impact on link building, or do only the keywords I attach to the posting matter?

To be honest, I’m not 100% clear on what this question is asking so I’ll answer about the impact a keyword anchor has when sitting on someone else’s page.

Words on a blog/site are considered content, even if the words are hyperlinked.  Your keyword anchor is content for the page it sits on and also a query indicator for the page it points to.  The page the link points to gets the bigger ranking bang because the query indicator is more important to the ranking process. If you hyperlink “click here” instead of using a keyword rich phrase, you lose the influence for the keyword but the engine will still follow the hyperlinks and make the connection between the pages.   It’s highly probable the term “click here” is seen as frivolous content on the site and does not add to the relevance factor.

Even though a lot of people feel anchors have been devalued lately, I don’t;  I think the dial on the number of times the anchor is used and how it’s used has been turned up. Way up.

Use all of your terms and their variations along with company and surnames, hyperlink verbs and call to action phrases so you motivate people to click. Above all, hyperlink words in a sentence when it makes sense and then link to content that reinforces what you’re saying.  Link to off topic content too many times and people stop clicking and reading. 

Anchors and on-page content are not the only ranking influences an engine uses, they each have multiple factors which include social and user-interactions. It’s best to use a wide range of tactics when you link and keep the four points of link popularity in mind as you work.  While it is best to try and link between two topically or geographically related pages to reinforce your intent, unrelated linking won’t hurt, it just doesn’t help as much.

Thanks for submitting your questions Laura, hope this helps :)


Debra Mastaler is a long time link building & publicity expert who has trained clients for over a decade at Alliance-Link. She is the link building moderator of our SEO Community & can be found on Twitter @DebraMastaler.


SEO Book.com


Behind The Scenes In Google’s Battle Against Bad Ads

When new employees join the Google ad quality team that manually reviews suspect ads, they start by studying internal documentation of policies that outline examples of ads that would be approved, and those that would be rejected. Then the employees’ skills are tested on ads that don’t…



Please visit Search Engine Land for the full article.




Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing



The Strategy Behind the Copyblogger Redesign


Applying web design to business goals is a serious business.

A few weeks ago — through the hard work and formidable talent of Rafal Tomal (our Lead Designer) — we launched a redesign of copyblogger.com.

There’s been a good bit of discussion around it, and a lot of good questions.

What’s changed? Why did we change it? And the big one — how is it all working out for the business?

Let’s answer those questions, and a lot more about the business end of design, right now …

In this episode Brian, Sonia, and I discuss:

  • Why redesign the site now? Why do it at all?
  • The 3 major goals of the Copyblogger redesign
  • Specific conversion stats, including one that hit 92%
  • How to think about design as it relates to business
  • The specific thinking behind each element of the design
  • The evolution of Copyblogger from blog to multi-million dollar business

Hit the flash player below to listen now:

Other listening options:

The Show Notes:

About the Author: Robert Bruce is Copyblogger Media’s copywriter and resident recluse. Get him on Twitter or Google+.


Copyblogger

