Tag Archive | "Methods"

12 Methods to Get from Blank Page to First Draft

If you’re like me, after taking some time off from writing, you’re refreshed and champing at the bit to translate…

The post 12 Methods to Get from Blank Page to First Draft appeared first on Copyblogger.


Copyblogger

Posted in Latest News | Comments Off

3 Methods Fueled by Data and Tools to Earn More (and Better) Links – Whiteboard Friday

Posted by randfish

Most conversations about links today involve terms like “better links” or “high-quality links.” Those are the kinds we all hope to earn, but what exactly defines a “better link”? How do we know whether a link qualifies, or is only so-so?

In today’s Whiteboard Friday, Rand clears up the confusion and offers a few clear attributes of better links, walking us through three great ways to find them.

PRO Tip: Learn more about reclaiming links at Moz Academy.

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. Today I’m going to talk a little bit about some data and tools-fueled methodologies to acquire more and better links and, in fact, some links that you may not have been able to find in other ways. So I’ll start by saying what does it mean to have a better link? Well, I mean really three things.

(A) Editorially given. By that I mean not a link that you go buy, not a link that you, sort of, acquire or leave on someone’s site unbeknownst to them or get by being listed in a directory. I mean an editorially given link: the person giving the link runs the website, or at least the page, where it’s being given, and they intended to link to you and want to link to you, out of no other desire than to share your site, the content that you have, the work that you are doing. They have a relationship with you, they like you, they want to recommend you. Editorially given.

(B) From a high-quality, trusted, and trafficked, well-trafficked website, something that actually might get you clicks in addition to providing link value from a search ranking perspective.

And (C) you’ve actually got a half-decent shot at getting that link. If I’m just showing you link methodologies that say, “Oh, yeah, it’d be real nice to get a link on that Whitehouse.gov page,” it’s not going to happen, man. Bad news, that’s going to be a tough one.

But these three, if we aim for these three, in particular aim for a decent shot at getting it, I think we get some good ones out of this.

So method number one, follower outreach, essentially, the practice of outreach for links, reaching out to someone and saying, “Hey, we have this piece of content you might like” or “We have this potential relationship we could build” or “Hey, I notice that you do some things that are interesting and maybe we could have some overlap here. Perhaps I could contribute in some way to something that you’re doing.”

Cool, it works a lot of the time, but it’s very hit or miss. The odds tilt way in your favor, though, if you actually have a relationship, a pre-existing knowledge of one another, and a mutual “like-and-respect” situation. That’s why outreach to followers, to people who actually already know you and like you, is way more effective.

So this is Followerwonk. You could use a tool of your choice. You might find people on Plus or some of the other social metrics tools.

But Followerwonk, I can go right in here, and on the Sort Followers tab, once I’ve logged in, I can sort my followers and say, “Show me a list of them.” Then I can export to CSV. The only trick, once I export to CSV, I’m looking for people with high social authority who have websites that I might want to do outreach to, and this is such a simple thing. If you want, you can get a little fancier. You can do things like put data in here, add a column and use Richard Baxter’s Mozscape plug-in, so that you can filter by domain authority of the website that’s in their bio and only outreach to people who haven’t already linked to you.
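
Once you have that CSV, a few lines of scripting can do the sorting and filtering for you. Here’s a rough sketch in Python; the column names (`social_authority`, `url`) are assumptions, so match them to the headers in your actual Followerwonk export:

```python
import csv

# Hypothetical column names -- check the headers in your actual
# Followerwonk export and adjust.
AUTHORITY_COL = "social_authority"
URL_COL = "url"

def high_authority_targets(csv_file, min_authority=50):
    """From a Followerwonk-style CSV export, keep followers who have high
    social authority and a website listed in their bio, sorted so the
    strongest outreach candidates come first."""
    targets = []
    for row in csv.DictReader(csv_file):
        authority = float(row.get(AUTHORITY_COL) or 0)
        if authority >= min_authority and (row.get(URL_COL) or "").strip():
            targets.append(row)
    targets.sort(key=lambda r: float(r[AUTHORITY_COL]), reverse=True)
    return targets
```

From there you can add the extra columns Rand mentions (like domain authority of the bio URL) before deciding whom to contact.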

But, generally speaking, I’ve found that even if somebody’s linking to you from one page, doing outreach to them and getting that second link, especially when you’ve targeted some of these people, is huge value. I’ve seen outreach of this kind work tremendously well, especially because they already know you: they’re following your account, and that means they care about what you have to say.

So when I reach out, they think, “Oh yeah, I’ve checked them out. I know something about them too. I’ve got their bio. I know what site they represent. I know who they are. I can interact with them on Twitter.” This works wonderfully. This is one of my favorite, favorite outreach methodologies. It starts with social.

Method two: Just-discovered competition. Many of you are probably already aware, but in Open Site Explorer, there’s this new tab called Just Discovered Links, way over on the right. It’s technically in beta, but it surfaces a lot of great links that are pointing to your website or to a competitor’s website.

This is the key. What I want you to do is go plug in a competitor. Start with just one of your competitor’s websites. Go over to the Just Discovered tab, and take a look at what people are writing about them and linking to them right now. I try to go for direct competitors, the kind where it would be a surprise if an editorial source, like a news publication, a blogger, or an industry thought leader, writes about them but doesn’t write about you. That’s always like, “Oh, if you’re going to mention one, you should mention several.”

This is where the key comes in, because you go here and you look at stuff that was literally just published in the last few hours or couple of days, and then you do the outreach right then. You could do it through commenting and just saying something about yourself like, “Hey, I’m not going to link drop because I don’t want to be spammy, but if you haven’t already checked out Moz, we’re a competitor to site XYZ, and we’d love to connect and follow up. Maybe you’d be interested in writing a story about some of the stuff that we’re doing. I’d be happy to fill you in. Reach out to me at Rand@Moz.” Something like that.

Or you could go find their e-mail contact information if you don’t want to make it public in the comments and reach out that way. The trick is that because these things have just been written, just been published, the odds that your outreach succeeds go way up. And you can look at domain authority. You can sort in order of domain authority, so you can look at the list and say, “Oh, yeah, I don’t want to reach out to that guy, but yes, yes, yes.” Ideal.

Methodology number three: “Why you no link? Why?” I’ll show you what I’m talking about.

So this is Fresh Web Explorer. You could use another service. You could use Mention.net. By the way, I don’t mean to say that Open Site Explorer is the only way to do this. You could use Majestic or something like that for this same thing, if you’re not a Moz subscriber. But assuming you are, all three of these are part of your subscription.

So with Fresh Web Explorer, I can go in and search for, and this is key, I know the Fresh Web Explorer query syntax, it’s sort of like the Yahoo! of old, where you’d do very sophisticated link-type searches. So make sure you’re familiar with all the modifiers. But this one, in particular, I love. It’s Moz, my brand name, minus RD:moz.com. There’s a space in between here, but no space otherwise.

The reason this works so well is because I’m essentially saying, “Show me people who have mentioned my brand name, Moz, but are not linking to any page on my site, and show me the ones that have just done that.” Because this is Fresh Web Explorer, so it’s going to show me recent stuff. Then, if I want, I can click on a specific day or those kinds of things. I can export the CSV over here.
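
If you ever want to run the same check yourself, the logic behind that query (brand mentioned, domain not linked) is easy to script. Here’s a hedged sketch using only Python’s standard library; it operates on raw HTML you’ve already fetched, and the function name is my own invention:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href found on the page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def unlinked_mention(html, brand, domain):
    """True if the page mentions the brand but never links to the domain --
    the same condition the query `Moz -rd:moz.com` expresses."""
    if brand.lower() not in html.lower():
        return False  # no mention at all, nothing to reclaim
    parser = LinkCollector()
    parser.feed(html)
    return not any(domain in href for href in parser.hrefs)
```

Pages where this returns True are exactly the “mentioned but not linked” opportunities described below.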

But, basically, I look at these and I go, “Huh. Interesting. So this is four days old. They mentioned Moz, but they didn’t link to us. Man, that’s a good, reasonable feed authority.” You can get domain authority as well in the CSV. “Man, I should reach out to them. That reporter, that blogger, that writer, that person who owns that website, why did they talk about me and not link to my site?”

It tends to be the case that this is just oversight. And if you just reach out and are like, “Hey, I loved that you covered us, really appreciated it. By the way, noticed you didn’t link. Was that intentional? Could we get a link back?” Boom. It’s just super easy, high-quality link building right off the bat.

These three methodologies will all help you with those. And for those of you who are doing link-building on a regular basis, I love this format. Whether you use our tools or someone else’s, it’s a great way to go.

All right, everyone. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Comparing Rank-Tracking Methods: Browser vs. Crawler vs. Webmaster Tools

Posted by Dr-Pete

Deep down, we all have the uncomfortable feeling that rank-tracking is unreliable at best, and possibly outright misleading. Then, we walk into our boss’s office, pick up the phone, or open our email, and hear the same question: “Why aren’t we #1 yet?!” Like it or not, rank-tracking is still a fact of life for most SEOs, and ranking will be a useful signal and diagnostic for when things go very wrong (or very right) for the foreseeable future.

Unfortunately, there are many ways to run a search, and once you factor in localization, personalization, data centers, data removal (such as [not provided]), and transparency (or the lack thereof), it’s hard to know how any keyword really ranks. This post is an attempt to compare four common rank-tracking methods:

  1. Browser – Personalized
  2. Browser – Incognito
  3. Crawler
  4. Google Webmaster Tools (GWT)

I’m going to do my best to keep this information unbiased and even academic in tone. Moz builds rank-tracking tools based in part on crawled data, so it would be a lie to say that we have no skin in the game. On the other hand, our main goal is to find and present the most reliable data for our customers. I will do my best to present the details of our methodology and data, and let you decide for yourselves.

Methodology

We started by collecting a set of 500 queries from Moz.com’s Google Webmaster Tools (GWT) data for the month of July 2013. We took the top 500 queries for that time period by impression count, which provided a decent range of rankings and click-through rates. We used GWT data because it’s the most constrained rank-tracking method on our list – in other words, we needed keywords that were likely to pop up on GWT when we did our final data collection.

On August 7th, we tracked these 500 queries using four methods:

(1) Browser – Personalized

This is the old-fashioned approach. I personally entered the queries on Google.com via the Chrome browser (v29) while logged into my own account.

(2) Browser – Incognito

Again, using Google.com on Chrome, I ran the queries manually. This time, though, I was fully logged out and used Chrome’s incognito mode. While this method isn’t perfect, it seems to remove many forms of personalization.

(3) Crawler

We modified part of the MozCast engine to crawl each of the 500 queries and parse the results. Crawls occurred across a range of IP addresses (and C-blocks), selected randomly. The crawler did not emulate cookies or any kind of login, and we added the personalization parameter (“&pws=0”) to remove other forms of personalization. The crawler also used the “&near=us” option to remove some forms of localization. We crawled up to five pages of Google results, which produced data for all but 12 of the 500 queries (since these were queries for which we knew Moz.com had recently ranked).
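
For illustration, here is roughly how such a crawler might construct its query URLs. The `&pws=0` and `&near=us` parameters are the ones described above; note that they reflect Google’s behavior circa 2013 and may no longer work the same way:

```python
from urllib.parse import urlencode

def serp_url(query, page=0, results_per_page=10):
    """Build a Google results URL with the depersonalization (pws=0) and
    delocalization (near=us) parameters used by the crawler method.
    Parameter behavior is as of 2013 and is not guaranteed today."""
    params = {
        "q": query,
        "pws": "0",     # remove personalization
        "near": "us",   # remove some localization
        "start": page * results_per_page,  # page offset (0, 10, 20, ...)
    }
    return "https://www.google.com/search?" + urlencode(params)
```

Crawling pages 0 through 4 of these URLs corresponds to the “up to five pages of Google results” described above.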

(4) Google Webmaster Tools

After Google made data available for August 7th, we exported average position data from GWT (via “Search Traffic” > “Search Queries”) for that day, filtering to just “Web” and “United States”, since those were the parameters of the other methods. While the other methods represent a single data point, GWT “Avg. position” theoretically represents multiple data points. Unfortunately, there is very little transparency about precisely how this data is measured.

Once the GWT data was exported and compared to the full list, there were 206 queries left with data from all four rank-tracking methods. All but a handful of the dropped keywords were due to missing data in GWT’s one-day report. Our analyses were conducted on this set of 206 queries with full data.

Results: Correlations

To compare the four ranking methods, we started with the pair-wise Spearman rank-order correlations (hat tip to my colleague, Dr. Matt Peters, for his assistance on this and the following analysis). All correlations were significant at the p<0.01* level, and r-values are shown in the table below:

*Given that the ranking methods are analogous to a repeated analysis of the same data set, we applied the Bonferroni correction to all p-values.
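
For readers who want to replicate this: Spearman’s rank-order correlation is just the Pearson correlation computed on ranks (with ties sharing averaged ranks), and the Bonferroni correction simply multiplies each p-value by the number of comparisons (six pairwise comparisons among four methods). A minimal, dependency-free sketch of the correlation itself:

```python
def rankdata(values):
    """Assign 1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over the run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In practice a stats package gives you the p-values as well; this sketch only shows where the r-values in the table come from.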

Interestingly, almost all of the methods showed very strong agreement, with Personalized vs. Incognito showing the most agreement (not surprisingly, as both are browser-based). Here’s a scatterplot of that data, plotted on log-log axes (done only for visualization’s sake, since the rankings were grouped pretty tightly at the upper spots):

Crawler vs. GWT had the lowest correlation, but it’s important to note that none of these differences were large enough to make a strong distinction between them. Here’s the scatterplot of that correlation, which is still very high/positive by most reasonable standards:

Since the GWT “Average” data is precise to one decimal point, there’s more variation in the Y-values, but the linear relationship remains very clear. Many of the keywords in this data set had #1 rankings in GWT, which certainly helped boost the correlations, but the differences in the methods appear to be surprisingly low.

If you’re new to correlation and r-values, check out my quick refresher: the correlation “mathographic”. The statement “p<0.01” means that there is less than a 1% probability that these r-values were the result of random chance. In other words, we can be 99% sure that there was some correlation in play (and it wasn’t zero). This doesn’t tell us how meaningful the correlation is. In this particular case, we’re just comparing sets of data to see how similar they are – we’re not making any statements about causation.

Results: Agreement

One problem with the pair-wise correlations is that we can only compare any one method to another. In addition, there’s a certain amount of dependence between the methods, so it’s hard to determine what a “strong” correlation is. During a smaller, pilot study, we decided that what we’re really interested in is how any given method compares to the totality of the other three methods. In other words, which method agrees or disagrees the most with the rest of the methods?

With the help of Dr. Peters, I created a metric of agreement (or, more accurately, disagreement). I’ll save the full details for Appendix A at the end of this article, but here’s a short version. Let’s say that the four methods return the following rankings (keeping in mind that GWT is an average):

  1. 2
  2. 1
  3. 1
  4. 2.8

Our disagreement metric produces the following values for each of the methods:

  1. 2.89
  2. 2.34
  3. 2.34
  4. 3.58

Since the two #1 rankings show the most agreement, methods (2) and (3) have the same score, with method (1) showing more disagreement and (4) showing the most disagreement. The greater the distance between the rankings, the higher the disagreement score, but any rankings that match will have the same score for any given keyword.

This yielded a disagreement score for each of the four methods for each of the 206 queries. We then took the mean disagreement score for each method, and got the following results:

  1. Personal = 1.12
  2. Incognito = 0.82
  3. Crawler = 0.98
  4. GWT = 1.26

GWT showed the highest average disagreement from the other methods, with incognito rankings coming in on the low end. On the surface, this suggests that, across the entire set of methods, GWT disagreed with the other three methods the most often.

Given that we’ve invented this disagreement metric, though, it’s important to ask if this difference is statistically significant. This data proved not to be normally distributed (a chunk of disagreement=0 data points skewed it to one side), so we decided our best bet for comparison was the non-parametric Mann-Whitney U Test.

Comparing the disagreement data for each pair of methods, the only difference that approached statistical significance was Incognito vs. GWT (p=0.022). Since I generally try to keep the bar high (p<0.01), I have to play by my own rules and say that the disagreement scores were too close to call. Our data cannot reliably tell the levels of disagreement apart at this point.
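
As a rough illustration of the test itself, here is a minimal Mann-Whitney U implementation using the normal approximation (reasonable at n≈200, as with the 206-query set). It omits the tie correction, so treat it as a sketch and use a proper stats package for real analysis:

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation,
    without the tie correction -- a sketch for illustration only."""
    n1, n2 = len(a), len(b)
    # U counts, over all pairs, how often a value from a beats one from b
    # (ties count half). O(n1*n2), fine for a few hundred points.
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u, p
```

Running it on two methods’ disagreement scores gives a p-value to compare against the corrected threshold.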

Results: Outliers

Even if the statistics told us that one method clearly disagreed more than the other methods, it still wouldn’t answer one very important question – which method is right? Is it possible, for example, that Google Webmaster Tools could disagree with all of the other methods, and still be the correct one? Yes, it’s within the realm of possibility.

No statistic will tell us which method is correct if we fundamentally distrust all of the methods (and I do, at least to a point), so our next best bet is to dig into some of the specific cases of disagreement and try to sort out what’s happening. Let’s look at a few cases of large-scale disagreement, trying not to bias toward any particular method.

Case 1 – Personalization Boost

Many of the cases where personalization disagreed are what you’d expect – Moz.com was boosted in my personalized results. For example, a search for “seo checklist” had Moz.com at #3 in my logged-in results, but #7 for both incognito and crawled, and an average of 6.7 for GWT (which is consistent with the #7 ballpark). Even by just clicking personalization off, Moz.com dropped to #4, and in a logged out browser a few days after the original data collection, it was at #5.

What’s fascinating to me is that personalization didn’t disagree even more often. Consider that all of these queries were searches that generated traffic for Moz.com and I’m on the site every day and very active in the SEO community. If personalization has the impact we seem to believe it has, I would theorize that personalized searches would disagree the most with other methods. It’s interesting that that wasn’t the case. While personalization can have a huge impact on some queries, the number of searches it affects still seems to be limited.

Case 2 – Personalization Penalty

In some cases, personalization actually produced lower rankings. For example, a search for “what is an analyst” showed Moz.com at the #12 position for both personalized and incognito searches. Meanwhile, crawled rankings put us at #3, and GWT’s average ranking was #5. Checking back (semi-manually), I now see us at #10 on personalized search and up to #2 for crawled rankings.

Why would this happen? Both searches (personalized vs. crawled) show a definition box for “analyst” at the top, which could indicate some kind of re-ranking in play, but the top 10 after that box differ by quite a bit. One would naturally assume that Moz.com would get a boost in any of my personalized searches, but that’s simply not the case. The situation is much more complex and real-time than we generally believe.

Case 3 – GWT (Ok, Google) Hates Us

Here’s one where GWT seems to be out of whack. In our one-day data collection, a search for “seo” showed Moz at #3 for personalized rankings and #4 for incognito and crawled. Meanwhile, GWT had us down in the #6 spot. It’s not a massive difference, but for such an important head keyword, it definitely could lead to some soul-searching.

As of this writing, my own searches showed Moz.com in the #4 spot, so I called in some help via social media. I asked people to do a logged-in (personalized) search for “seo” and report back where they found Moz.com. I removed data from non-US participants, which left 63 rankings (36 from Twitter, and 27 from Facebook). The reported rankings ranged from #3 to #8, with an average of 4.11. These rankings were reported from across the US, and only two participants reported rankings at #6 or below. Here’s the breakdown of the raw data:

You can see the clear bias toward the #4 position across the social data. You could argue that, since many of my friends are SEOs, we all have similarly biased rankings, but this quickly leads to speculation. Saying that GWT numbers don’t match because of personalization is a bit like saying that the universe must be made of dark matter just because the numbers don’t add up without it. In the end, that may be true, but we still need the evidence.

Face Validity

Ultimately, this is my concern – when GWT’s numbers disagree, we’re left with an argument that basically boils down to “Just trust us.” This is difficult for many SEOs, given what feels like a concerted effort by Google to remove critical data from our view. On the one hand, we know that personalization, localization, etc. can skew our individual viewpoints (and that browser-based rankings are unreliable). On the other hand, if 56 out of 63 people (89%) all see my site at #3 or #4 for a critical head term and Google says the “average” is #6, that’s a hard pill to swallow with no transparency around where Google’s number is coming from.

In measurement, we call this “face validity”. If something doesn’t look right on the surface, we generally want more proof to sort out why, and that’s usually a reasonable instinct. Ultimately, Google’s numbers may be correct – it’s hard to prove they’re not. The problem is that we know almost nothing about how they’re measured. How does Google count local and vertical results, for example? What/who are they averaging? Is this a sample, and if so, how big of a sample and how representative? Is data from [not provided] keywords included in the mix?

Without these answers, we tend to trust what we can see, and while we may be wrong, it’s hard to argue that we shouldn’t. What’s more, it’s nearly impossible to convince our clients and bosses to trust a number they can’t see, right or wrong.

Conclusions

The “good” news, if we’re being optimistic, is that the four methods we considered in this study (Personalized, Incognito, Crawler, and GWT) really didn’t differ that much from each other. They all have their potential faults, but in most cases they’ll give you an answer that’s in the ballpark of reality. If you focus on relative change over time and not absolute numbers, then all four methods have some value, as long as you’re consistent.

Over time, this situation may change. Even now, none of these methods measure anything beyond core organic ranking. They don’t incorporate local results, they don’t indicate if there are prominent SERP features (like Answer Boxes or Knowledge Graph entries), they don’t tell us anything about click-through or traffic, and they all suffer from the little white lie of assumed linearity. In other words, we draw #1 – #10, etc. on a straight line, even though we know that click-through and impact drop dramatically after the first couple of ranking positions.

In the end, we need to broaden our view of rankings and visibility, regardless of which measurement method we use, and we need to keep our eyes open. In the meantime, the method itself probably isn’t critically important for most keywords, as long as we’re consistent and transparent about the limitations. When in doubt, consider getting data from multiple sources, and don’t put too much faith in any one number.

Appendix A: Measuring Disagreement

During a pilot study, we realized that, in addition to pair-wise comparisons of any two methods, what we really wanted to know was how any one method compared to the rest of the methods. In other words, which methods agreed (or disagreed) the most with the set of methods as a whole? We invented a fairly simple metric based on the sum of the differences between each of the methods. Let’s take the example from the post – here, the four methods returned the following rankings (for Keyword X):

  1. 2
  2. 1
  3. 1
  4. 2.8

We wanted to reward methods (2) and (3) for being the most similar (it doesn’t matter that they showed Keyword X in the #1 position, just that they agreed), and slightly penalize (1) and (4) for mismatching. After testing a few options, we settled (I say “we”, but I take full blame for this particular nonsense) on calculating the sum of the square roots of the absolute differences between each method and the other three methods.

That sounds a lot more complicated than it actually is. Let’s calculate the disagreement score for method 1, which we’ll call “M1” (likewise, we’ll call the other methods M2, M3, and M4). I call it a “disagreement” score because larger values ended up representing lower agreement. For M1 for Keyword X, the disagreement score is calculated by:

sqrt(abs(M1-M2)) + sqrt(abs(M1-M3)) + sqrt(abs(M1-M4))

The absolute value is used because we don’t care about the direction of the difference, and the square root is essentially a dampening function. I didn’t want outliers to be overemphasized, or one bad data point for one method could potentially skew the results. For Method 1 (M1), then, the disagreement value is:

sqrt(abs(2-1)) + sqrt(abs(2-1)) + sqrt(abs(2-2.8))

…which works out to 2.89. Here are the values for all four methods:

  1. 2.89
  2. 2.34
  3. 2.34
  4. 3.58
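
The metric is simple enough to express in a few lines of code. This sketch reproduces the worked example above:

```python
import math

def disagreement(rankings):
    """For each method's rank, sum the square roots of the absolute
    differences to every other method's rank. Larger = more disagreement."""
    return [
        sum(math.sqrt(abs(mine - other))
            for j, other in enumerate(rankings) if j != i)
        for i, mine in enumerate(rankings)
    ]
```

For the rankings [2, 1, 1, 2.8], this returns (to two decimals) [2.89, 2.34, 2.34, 3.58], matching the values above.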

Let’s look at a couple more examples, just so that you don’t have to take my word for how this works. In this second case, two methods still agree, but the ranking positions are “lower” (which equates to larger numbers), as follows:

  1. 12
  2. 12
  3. 3
  4. 5

The disagreement metric yields the following values:

  1. 5.65
  2. 5.65
  3. 7.41
  4. 6.71

M1 and M2 are in agreement, so they have the same disagreement value, but all four values are elevated a bit to show that the overall distance across the four methods is fairly large. Finally, here’s an example where two methods each agree with one other method:

  1. 2
  2. 2
  3. 5
  4. 5

In this case, all four methods have the same disagreement score:

  1. 3.46
  2. 3.46
  3. 3.46
  4. 3.46

Again, we don’t care very much that two methods ranked Keyword X at #2 and two at #5 – we only care that each method agreed with one other method. So, in this case, all four methods are equally in agreement, when you consider the entire set of rank-tracking methods. If the difference between the two pairs of methods was larger, the disagreement score would increase, but all four methods would still share that score.

Finally, for each method, we took the mean disagreement score across the 206 keywords with full ranking data. This yielded a disagreement measurement for each method. Again, these measurements turned out not to differ by a statistically significant margin, but I’ve presented the details here for transparency and, hopefully, for refinement and replication by other people down the road.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog


Site Search Solutions: 3 methods for implementing search on your site

As your site grows in size, choosing the right site search method will become an important factor in keeping your content available for visitors. Learn about three site search implementation methods that you can use to help keep your site running smoothly.
MarketingSherpa Blog


5 Methods for Connecting Online and Offline Marketing

Inbound marketing is awesome, but let’s be honest: many marketers are still doing a mix of inbound and outbound marketing. The challenge in using both online and offline marketing tactics is integrating them in an effort to generate even better results than just one tactic would have experienced on its own.

At first, this idea might seem contradictory. How do online and offline marketing work together? One key factor is analytics, among others. Check this list for some of the best ways to connect online and offline marketing.

5 Methods for Connecting Online and Offline Marketing

  1. Tracking URLs – The web is great for analytics. When using offline tactics like print advertising or outdoor advertising, be sure to use unique tracking URLs within each separate advertisement and placement. These URLs serve as redirects that your web analytics can track while sending all visitors to one core page with your central offer. This method lets you understand which segment of your conversions comes from your offline tactics.

2. Social Media Driving Offline Traffic – Do you exhibit at tradeshows? How do you get traffic to your booth? Sure, giveaways and spending lots of money is one way, but why not supplement that with some online promotion using social media and your corporate blog to promote your presence at the tradeshow? Use your online reach to educate people why they should stop by and connect with your team in an offline situation. Consider offering something exclusive to social media followers who stop by your booth.

  3. QR Codes – Mobile technology is huge. One aspect of mobile that is gaining traction with marketers is QR codes. These two-dimensional barcodes allow someone in an offline situation to use their mobile phone to scan a code that automatically performs a specific action, such as taking them to a website, showing them a video, or sending them a text message. QR codes can be a powerful tool to link offline and online efforts. Read more about what you should know about QR codes in a recent article we published.

  4. Offline Reach Building – Do you include URLs for your social media accounts in your offline marketing materials? You should. When working to build online reach, including your account information in offline materials can help inform potential social media connections who may have never known about your online content. The next time you are printing brochures or designing an ad, make sure to include your social media profile URLs (e.g. http://twitter.com/hubspot or http://facebook.com/hubspot) to encourage people who find you offline to follow you online, too. Avoid simply including logos for Twitter and Facebook without providing your URLs. This doesn’t help your business; rather, it’s free advertising for those social networks.

  5. Social Media Lead Intelligence – Unfortunately, buying leads and cold calling still happen. If you are still purchasing leads for your sales team, at least help them improve their close rate by providing them with online data and background information about the lead. Even if it’s just teaching your sales team how to search on LinkedIn to identify the lead’s background and interests, these details can be instrumental in helping to build trust with new prospects.
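
To make point 1 concrete: one common way to implement unique tracking URLs is to tag each placement’s landing-page URL with Google Analytics UTM parameters (the post describes redirect-based tracking; UTM tagging is a closely related alternative). A quick sketch:

```python
from urllib.parse import urlencode

def tracking_url(base_url, source, medium, campaign):
    """Tag a landing-page URL with Google Analytics UTM parameters so each
    offline placement (a print ad, a billboard) shows up as a distinct
    traffic source in your web analytics."""
    params = urlencode({
        "utm_source": source,      # e.g. the publication or venue
        "utm_medium": medium,      # e.g. "print", "outdoor"
        "utm_campaign": campaign,  # the campaign name
    })
    sep = "&" if "?" in base_url else "?"
    return base_url + sep + params
```

Each advertisement gets its own source/medium/campaign combination, so conversions can be attributed back to the specific offline placement.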

What methods have you used for successfully connecting your offline and online marketing?

Free Ebook: The Essential Step-by-Step Guide to Internet Marketing

Learn how to implement a comprehensive internet marketing strategy, step by step.

Download this free ebook for step-by-step instructions on how to make internet marketing work for your business.


HubSpot’s Inbound Internet Marketing Blog


