Tag Archive | "Science"

Katsuko Saruhashi Google doodle honors first woman elected to Science Council of Japan

Born on this day in 1920, Saruhashi was a renowned Japanese geochemist who spent her career advocating for female scientists.

The post Katsuko Saruhashi Google doodle honors first woman elected to Science Council of Japan appeared first on Search Engine Land.



Please visit Search Engine Land for the full article.


Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Posted in Latest News | Comments Off

Understanding the Brain Science Behind Effective Persuasion, with Roger Dooley


The ancient Greeks — notably Aristotle — used anecdotal observation to nail much of what we know about persuasion. The fundamentals of the art haven’t changed much in 2,300 years, because human nature hasn’t changed, even as the context in which we operate has changed dramatically.

In the 20th century, social psychology took the ancient principles of rhetoric and proved them correct in controlled experiments. The work of Dr. Robert Cialdini in particular helped prove the power of authority, social proof, scarcity, and other fundamental aspects of influence.

Now, we have neuroscience. Brain imaging allows us to go beyond observing human response alone, and see which parts of the brain “light up” while responding the way we do in certain situations.

Roger Dooley is the author of Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing. Today he joins us to reveal the keys to understanding what makes people choose and behave the way they do, along with some cutting-edge thoughts about “tribal” marketing.

Listen to this Episode Now

The post Understanding the Brain Science Behind Effective Persuasion, with Roger Dooley appeared first on Copyblogger.


Copyblogger

Posted in Latest News | Comments Off

Deconstructing the App Store Rankings Formula with a Little Mad Science

Posted by AlexApptentive

After seeing Rand’s “Mad Science Experiments in SEO” presented at last year’s MozCon, I was inspired to put on the lab coat and goggles and do a few experiments of my own—not in SEO, but in SEO’s up-and-coming younger sister, ASO (app store optimization).

Working with Apptentive to guide enterprise apps and small startup apps alike to increase their discoverability in the app stores, I’ve learned a thing or two about app store optimization and what goes into an app’s ranking. It’s been my personal goal for some time now to pull back the curtains on Google and Apple. Yet, the deeper into the rabbit hole I go, the more untested assumptions I encounter along the way.

Hence, I thought it was due time to put some longstanding hypotheses through the gauntlet.

As SEOs, we know how much of an impact a single ranking position can have on a SERP. One tiny rank up or down can make all the difference when it comes to your website’s traffic—and revenue.

In the world of apps, ranking is just as important when it comes to standing out in a sea of more than 1.3 million apps. Apptentive’s recent mobile consumer survey shed a little more light on this claim, revealing that nearly half of all mobile app users identified browsing the app store charts and search results (the placement on either of which depends on rankings) as a preferred method for finding new apps in the app stores. Simply put, better rankings mean more downloads and easier discovery.

Like Google and Bing, the two leading app stores (the Apple App Store and Google Play) have complex and highly guarded algorithms for determining rankings for both keyword-based app store searches and composite top charts.

Unlike in SEO, however, very little research has been conducted into what goes into these rankings.

Until now, that is.

Over the course of five studies analyzing various publicly available data points for a cross-section of the top 500 iOS (U.S. Apple App Store) and the top 500 Android (U.S. Google Play) apps, I’ll attempt to set the record straight with a little myth-busting around ASO. In the process, I hope to assess and quantify any perceived correlations between app store ranks, ranking volatility, and a few of the factors commonly thought of as influential to an app’s ranking.

But first, a little context

Apple App Store vs. Google Play

Image credit: Josh Tuininga, Apptentive

Both the Apple App Store and Google Play have roughly 1.3 million apps each, and both stores feature a similar breakdown by app category. Apps ranking in the two stores should, theoretically, be on a fairly level playing field in terms of search volume and competition.

Of these apps, nearly two-thirds have not received a single rating and 99% are considered unprofitable. These studies, therefore, single out the rare exceptions to the rule—the top 500 ranked apps in each store.

While neither Apple nor Google have revealed specifics about how they calculate search rankings, it is generally accepted that both app store algorithms factor in:

  • Average app store rating
  • Rating/review volume
  • Download and install counts
  • Uninstalls (what retention and churn look like for the app)
  • App usage statistics (how engaged an app’s users are and how frequently they launch the app)
  • Growth trends weighted toward recency (how daily download counts changed over time and how today’s ratings compare to last week’s)
  • Keyword density of the app’s landing page (Ian did a great job covering this factor in a previous Moz post)

I’ve simplified this formula to a function highlighting the four elements with sufficient data (or at least proxy data) for our analysis:

Ranking = fn(Rating, Rating Count, Installs, Trends)

Of course, right now, this generalized function doesn’t say much. Over the next five studies, however, we’ll revisit this function before ultimately attempting to compare the weights of each of these four variables on app store rankings.

(For the purpose of brevity, I’ll stop here with the assumptions, but I’ve gone into far greater depth into how I’ve reached these conclusions in a 55-page report on app store rankings.)

Now, for the Mad Science.

Study #1: App-les to app-les app store ranking volatility

The first, and most straightforward, of the five studies involves tracking daily movement in app store rankings across the iOS and Android versions of the same apps to determine any differences in ranking volatility between the two stores.

I went with a small sample of five apps for this study, the only criteria for which were that:

  • They were all apps I actively use (a criterion for coming up with the five apps but not one that influences rank in the U.S. app stores)
  • They were ranked in the top 500 (but not the top 25, as I assumed app store rankings would be stickier at the top—an assumption I’ll test in study #2)
  • They had an almost identical version of the app in both Google Play and the App Store, meaning they should (theoretically) rank similarly
  • They covered a spectrum of app categories

The apps I ultimately chose were Lyft, Venmo, Duolingo, Chase Mobile, and LinkedIn. These five apps represent the travel, finance, education, banking, and social networking categories.

Hypothesis

Going into this analysis, I predicted slightly more volatility in Apple App Store rankings, based on two statistics:

Both of these assumptions will be tested in later analysis.

Results

7-Day App Store Ranking Volatility in the App Store and Google Play

Among these five apps, Google Play rankings were, indeed, significantly less volatile than App Store rankings. Among the 35 data points recorded, rankings within Google Play moved by as much as 23 positions/ranks per day while App Store rankings moved up to 89 positions/ranks. The standard deviation of ranking volatility in the App Store was, furthermore, 4.45 times greater than that of Google Play.

Of course, the same apps varied fairly dramatically in their rankings in the two app stores, so I then standardized the ranking volatility in terms of percent change to control for the effect of numeric rank on volatility. When cast in this light, App Store rankings changed by as much as 72% within a 24-hour period while Google Play rankings changed by no more than 9%.
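The standardization step above can be sketched in a few lines of Python; the daily rank series below are hypothetical examples, not the study's actual data:

```python
# Standardizing daily ranking volatility as percent change, so that an app
# ranked ~400 and one ranked ~40 can be compared fairly.

def daily_volatility(ranks):
    """Absolute day-over-day rank movement."""
    return [abs(b - a) for a, b in zip(ranks, ranks[1:])]

def percent_change(ranks):
    """Day-over-day movement as a fraction of the previous day's rank."""
    return [abs(b - a) / a for a, b in zip(ranks, ranks[1:])]

app_store   = [120, 207, 150, 95, 160]    # hypothetical iOS ranks over 5 days
google_play = [130, 128, 133, 131, 129]   # hypothetical Android ranks

print(max(daily_volatility(app_store)))   # largest raw daily move
print(max(percent_change(google_play)))   # largest relative daily move
```

Dividing by the previous day's rank is what controls for the effect of numeric rank on volatility.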

Also of note, daily rankings tended to move in the same direction across the two app stores approximately two-thirds of the time, suggesting that the two stores, and their customers, may have more in common than we think.

Study #2: App store ranking volatility across the top charts

Testing the assumption implicit in standardizing the data in study No. 1, this one was designed to see if app store ranking volatility is correlated with an app’s current rank. The sample for this study consisted of the top 500 ranked apps in both Google Play and the App Store, with special attention given to those on both ends of the spectrum (ranks 1–100 and 401–500).

Hypothesis

I anticipated rankings to be more volatile the lower an app is ranked—meaning an app ranked No. 450 should be able to move more ranks in any given day than an app ranked No. 50. This hypothesis is based on the assumption that higher-ranked apps have more installs, active users, and ratings, and that a much larger shift in any of these factors would be needed to produce a noticeable change in rank.

Results

App Store Ranking Volatility of Top 500 Apps

One look at the chart above shows that apps in both stores have increasingly more volatile rankings (based on how many ranks they moved in the last 24 hours) the lower on the list they’re ranked.

This is particularly true when comparing either end of the spectrum—with a seemingly straight volatility line among Google Play’s Top 100 apps and very few blips within the App Store’s Top 100. Compare this section to the lower end, ranks 401–500, where both stores experience much more turbulence in their rankings. Across the gamut, I found a 24% correlation between rank and ranking volatility in the Play Store and a 28% correlation in the App Store.

To put this into perspective, the average app in Google Play’s 401–500 ranks moved 12.1 ranks in the last 24 hours while the average app in the Top 100 moved a mere 1.4 ranks. For the App Store, these numbers were 64.28 and 11.26, making slightly lower-ranked apps more than five times as volatile as the highest ranked apps. (I say slightly as these “lower-ranked” apps are still ranked higher than 99.96% of all apps.)

The relationship between rank and volatility is fairly consistent across the App Store charts, while in Google Play rank has a much greater impact on volatility at the top of the charts (ranks 1–100 show a 35% correlation) than at the bottom (ranks 401–500 show a 1% correlation).
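The rank-vs.-volatility figures in this study are plain Pearson correlations. A minimal sketch, using made-up rank and 24-hour-movement pairs rather than the study's dataset:

```python
# Pearson correlation between an app's current rank and how many ranks it
# moved in the last 24 hours. All data points here are hypothetical.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ranks     = [10, 50, 120, 250, 450]   # hypothetical current ranks
moves_24h = [1, 2, 6, 14, 40]         # hypothetical ranks moved in 24h

r = pearson(ranks, moves_24h)
print(f"correlation: {r:.0%}")  # positive: lower-ranked apps move more
```

A positive correlation here means volatility grows as the rank number grows, i.e., as apps sit lower on the chart.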

Study #3: App store rankings across the stars

The next study looks at the relationship between rank and star ratings to determine any trends that set the top chart apps apart from the rest and explore any ties to app store ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

As discussed in the introduction, this study relates directly to one of the factors commonly accepted as influential to app store rankings: average rating.

Getting started, I hypothesized that higher ranks generally correspond to higher ratings, cementing the role of star ratings in the ranking algorithm.

As far as volatility goes, I did not anticipate average rating to play a role in app store ranking volatility, as I saw no reason for higher rated apps to be less volatile than lower rated apps, or vice versa. Instead, I believed volatility to be tied to rating volume (as we’ll explore in our last study).

Results

Average App Store Ratings of Top Apps

The chart above plots the top 100 ranked apps in either store against their average rating (both historic and current, for App Store apps). If it looks a little chaotic, that’s just one indicator of the complexity of the ranking algorithms in Google Play and the App Store.

If our hypothesis was correct, we’d see a downward trend in ratings. We’d expect to see the No. 1 ranked app with a significantly higher rating than the No. 100 ranked app. Yet, in neither store is this the case. Instead, we get a seemingly random plot with no obvious trends that jump off the chart.

A closer examination, in tandem with what we already know about the app stores, reveals two other interesting points:

  1. The average star rating of the top 100 apps is significantly higher than that of the average app. Across the top charts, the average rating of a top 100 Android app was 4.319 and the average top iOS app was 3.935. These ratings are 0.32 and 0.27 points, respectively, above the average rating of all rated apps in either store. The averages across apps in the 401–500 ranks approximately split the difference between the ratings of the top ranked apps and the ratings of the average app.
  2. The rating distribution of top apps in Google Play was considerably more compact than the distribution of top iOS apps. The standard deviation of ratings in the Apple App Store top chart was over 2.5 times greater than that of the Google Play top chart, likely meaning that ratings are more heavily weighted in Google Play’s algorithm.

App Store Ranking Volatility and Average Rating

Looking next at the relationship between ratings and app store ranking volatility reveals a -15% correlation that is consistent across both app stores, meaning the higher an app is rated, the less its rank is likely to move in a 24-hour period. The exception to this rule is the Apple App Store’s calculation of an app’s current rating, for which I did not find a statistically significant correlation.

Study #4: App store rankings across versions

This next study looks at the relationship between the age of an app’s current version, its rank and its ranking volatility.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

In alteration of the above function, I’m using the age of an app’s current version as a proxy (albeit not a very good one) for trends in app store ratings and app quality over time.

Making the assumptions that (a) apps that are updated more frequently are of higher quality and (b) each new update inspires a new wave of installs and ratings, I’m hypothesizing that the older the age of an app’s current version, the lower it will be ranked and the less volatile its rank will be.

Results

How update frequency correlates with app store rank

The first and possibly most important finding is that apps across the top charts in both Google Play and the App Store are updated remarkably often as compared to the average app.

At the time of conducting the study, the current version of the average iOS app on the top chart was only 28 days old; the current version of the average Android app was 38 days old.

As hypothesized, the age of the current version is negatively correlated with the app’s rank, with a 13% correlation in Google Play and a 10% correlation in the App Store.

How update frequency correlates with app store ranking volatility

The next part of the study maps the age of the current app version to its app store ranking volatility, finding that recently updated Android apps have less volatile rankings (correlation: 8.7%) while recently updated iOS apps have more volatile rankings (correlation: -3%).

Study #5: App store rankings across monthly active users

In the final study, I wanted to examine the role of an app’s popularity on its ranking. In an ideal world, popularity would be measured by an app’s monthly active users (MAUs), but since few mobile app developers have released this information, I’ve settled for two publicly available proxies: Rating Count and Installs.

Hypothesis

Ranking = fn(Rating, Rating Count, Installs, Trends)

For the same reasons indicated in the second study, I anticipated that more popular apps (e.g., apps with more ratings and more installs) would be higher ranked and less volatile in rank. This, again, takes into consideration that it takes more of a shift to produce a noticeable impact in average rating or any of the other commonly accepted influencers of an app’s ranking.

Results

Apps with more ratings and reviews typically rank higher

The first finding leaps straight off of the chart above: Android apps have been rated more times than iOS apps, 15.8x more, in fact.

The average app in Google Play’s Top 100 had a whopping 3.1 million ratings while the average app in the Apple App Store’s Top 100 had 196,000 ratings. In contrast, apps in the 401–500 ranks (still tremendously successful apps, in the 99.96th percentile of all apps) tended to have between one-tenth (Android) and one-fifth (iOS) the ratings count of apps in the top 100 ranks.

Considering that almost two-thirds of apps don’t have a single rating, reaching rating counts this high is a huge feat, and a very strong indicator of the influence of rating count in the app store ranking algorithms.

To even out the playing field a bit and help us visualize any correlation between ratings and rankings (and to give more credit to the still-staggering 196k ratings for the average top ranked iOS app), I’ve applied a logarithmic scale to the chart above:

The relationship between app store ratings and rankings in the top 100 apps

From this chart, we can see a correlation between ratings and rankings, such that apps with more ratings tend to rank higher. This equates to a 29% correlation in the App Store and a 40% correlation in Google Play.
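The logarithmic scaling applied to that chart is a simple base-10 transform. A quick sketch using the average rating counts quoted above:

```python
# Applying a log10 transform so that 3.1M Android ratings and 196k iOS
# ratings can share a readable axis. Counts are the averages from the post.
from math import log10

avg_ratings = {
    "Google Play top 100": 3_100_000,
    "App Store top 100": 196_000,
}

for store, count in avg_ratings.items():
    print(store, round(log10(count), 2))
```

On a log scale the two stores end up roughly one unit apart instead of differing by millions, which is what makes the correlation visible on a single chart.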

Apps with more ratings typically experience less app store ranking volatility

Next up, I looked at how ratings count influenced app store ranking volatility, finding that apps with more ratings had less volatile rankings in the Apple App Store (correlation: 17%). No conclusive evidence was found within the Top 100 Google Play apps.

Apps with more installs and active users tend to rank higher in the app stores

And last but not least, I looked at install counts as an additional proxy for MAUs. (Sadly, this is a statistic only listed in Google Play, so any resulting conclusions are applicable only to Android apps.)

Among the top 100 Android apps, this last study found that installs were heavily correlated with ranks (correlation: -35.5%), meaning that apps with more installs are likely to rank higher in Google Play. Android apps with more installs also tended to have less volatile app store rankings, with a correlation of -16.5%.

Unfortunately, these numbers are slightly skewed, as Google Play only provides install counts in broad ranges (e.g., 500k–1M). For each app, I took the low end of the range, meaning we can likely expect the correlation to be a little stronger, since the low end was further from the midpoint for apps with more installs.
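Taking the low end of each install range can be sketched as below. The range strings mimic Google Play's displayed format, but the values are illustrative, and the `low_end` helper is my own naming, not part of any study tooling:

```python
# Google Play reports installs only as broad ranges ("500,000 - 1,000,000"),
# so each app is assigned the lower bound of its range.

def low_end(install_range: str) -> int:
    """Return the lower bound of an 'X - Y' install range as an integer."""
    low = install_range.split("-")[0]
    return int(low.replace(",", "").strip())

ranges = ["100,000 - 500,000", "500,000 - 1,000,000", "10,000,000 - 50,000,000"]
print([low_end(r) for r in ranges])
```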

Summary

To make a long post ever so slightly shorter, here are the nuts and bolts unearthed in these five mad science studies in app store optimization:

  1. Across the top charts, Apple App Store rankings are 4.45x more volatile than those of Google Play
  2. Rankings become increasingly volatile the lower an app is ranked. This is particularly true across the Apple App Store’s top charts.
  3. In both stores, higher ranked apps tend to have an app store ratings count that far exceeds that of the average app.
  4. Ratings appear to matter more to the Google Play algorithm, especially as the Apple App Store top charts experience a much wider ratings distribution than that of Google Play’s top charts.
  5. The higher an app is rated, the less volatile its rankings are.
  6. The 100 highest ranked apps in either store are updated much more frequently than the average app, and apps with older current versions are correlated with lower ranks.
  7. An app’s update frequency is negatively correlated with Google Play’s ranking volatility but positively correlated with ranking volatility in the App Store. This is likely due to how Apple weights an app’s most recent ratings and reviews.
  8. The highest ranked Google Play apps receive, on average, 15.8x more ratings than the highest ranked App Store apps.
  9. In both stores, apps that fall under the 401–500 ranks receive, on average, 10–20% of the rating volume seen by apps in the top 100.
  10. Rating volume and, by extension, installs or MAUs are perhaps the best indicators of rank, with a 29–40% correlation between the two.

Revisiting our first (albeit oversimplified) guess at the app stores’ ranking algorithm gives us this loosely defined function:

Ranking = fn(Rating, Rating Count, Installs, Trends)

I’d now rewrite the function as a formula by weighting each of these four factors, where a, b, c, and d are unknown multipliers, or weights:

Ranking = (Rating * a) + (Rating Count * b) + (Installs * c) + (Trends * d)
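As a rough sketch, that weighted formula might look like the following. The weights a–d are unknown; the values below are placeholders chosen only to echo the ordering the studies suggest (rating count and installs mattering most), and the inputs are assumed to be pre-normalized to a 0–1 scale so the weights are comparable:

```python
# Illustrative weighted ranking score. The weights are NOT measured
# coefficients, just placeholders reflecting the relative ordering
# suggested by the five studies.

def ranking_score(rating, rating_count, installs, trend,
                  a=0.1, b=0.4, c=0.3, d=0.2):
    # All inputs assumed normalized to [0, 1].
    return a * rating + b * rating_count + c * installs + d * trend

# Two hypothetical apps with normalized feature values:
app_x = ranking_score(rating=0.9, rating_count=0.2, installs=0.3, trend=0.5)
app_y = ranking_score(rating=0.7, rating_count=0.8, installs=0.6, trend=0.4)
print(app_y > app_x)  # the app with more ratings/installs scores higher
```

Under these placeholder weights, a slightly lower-rated app with far more ratings and installs outranks a highly rated but little-used one, which is the pattern the studies point toward.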

These five studies on ASO shed a little more light on these multipliers, showing Rating Count to have the strongest correlation with rank, followed closely by Installs, in either app store.

It’s with the other two factors—rating and trends—that the two stores show the greatest discrepancy. I’d hazard a guess to say that the App Store prioritizes growth trends over ratings, given the importance it places on an app’s current version and the wide distribution of ratings across the top charts. Google Play, on the other hand, seems to favor ratings, with an unwritten rule that apps just about have to have at least four stars to make the top 100 ranks.

Thus, we conclude our mad science with this final glimpse into what it takes to make the top charts in either store:

Weight of factors in the Apple App Store ranking algorithm

Rating Count > Installs > Trends > Rating

Weight of factors in the Google Play ranking algorithm

Rating Count > Installs > Rating > Trends


Again, we’re oversimplifying for the sake of keeping this post to a mere 3,000 words, but additional factors including keyword density and in-app engagement statistics continue to be strong indicators of ranks. They simply lie outside the scope of these studies.

I hope you found this deep-dive both helpful and interesting. Moving forward, I also hope to see ASOs conducting the same experiments that have brought SEO to the center stage, and encourage you to enhance or refute these findings with your own ASO mad science experiments.

Please share your thoughts in the comments below, and let’s deconstruct the ranking formula together, one experiment at a time.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

We Removed a Major Website from Google Search, for Science!

Posted by Cyrus-Shepard

The folks at Groupon surprised us earlier this summer when they reported the
results of an experiment that showed that up to 60% of direct traffic is organic.

In order to accomplish this, Groupon de-indexed their site, effectively removing themselves from Google search results. That’s crazy talk!

Of course, we knew we had to try this ourselves.

We rolled up our sleeves and chose to de-index
Followerwonk, both for its consistent Google traffic and its good analytics setup—that way we could properly measure everything. We were also confident we could quickly bring the site back into Google’s results, which minimized the business risks.

(We discussed de-indexing our main site moz.com, but… no soup for you!)

We wanted to measure and test several things:

  1. How quickly will Google remove a site from its index?
  2. How much of our organic traffic is actually attributed as direct traffic?
  3. How quickly can you bring a site back into search results using the URL removal tool?

Here’s what happened.

How to completely remove a site from Google

The fastest, simplest, and most direct method to completely remove an entire site from Google search results is to use the URL removal tool.

We also understood, via statements by Google engineers, that using this method gave us the biggest chance of bringing the site back, with little risk. Other methods of de-indexing, such as using meta robots NOINDEX, might have taken weeks and caused recovery to take months.

CAUTION: Removing any URLs from a search index is potentially very dangerous, and should be taken very seriously. Do not try this at home; you will not pass go, and will not collect $ 200!

After submitting the request, Followerwonk URLs started disappearing from Google search results within 2–3 hours.

The information needs to propagate across different data centers around the globe, so the effect can be delayed in some areas. In fact, for the entire duration of the test, organic Google traffic continued to trickle in and never dropped to zero.

The effect on direct vs. organic traffic

In the Groupon experiment, they found that when they lost organic traffic, they
actually lost a bunch of direct traffic as well. The Groupon conclusion was that a large amount of their direct traffic was actually organic—up to 60% on “long URLs”.

At first glance, the overall amount of direct traffic to Followerwonk didn’t change significantly, even when organic traffic dropped.

In fact, we could find no discrepancy in direct traffic outside the expected range.

I ran this by our contacts at Groupon, who said this wasn’t totally unexpected. You see, in their experiment they saw the biggest drop in direct traffic on
long URLs, defined as URLs long enough to sit in a subfolder, like https://followerwonk.com/bio/?q=content+marketer.

For Followerwonk, the vast majority of traffic goes to the homepage and a handful of other URLs. This means we didn’t have a statistically significant sample size of long URLs to judge the effect. For the long URLs we were able to measure, the results were nebulous. 

Conclusion: While we can’t confirm the Groupon results with our outcome, we can’t discount them either.

It’s quite likely that a portion of your organic traffic is attributed as direct. This is because different browsers, operating systems, and user privacy settings can block referral information from reaching your website.

Bringing your site back from death

After waiting 2 hours,
we deleted the request. Within a few hours all traffic returned to normal. Whew!

Does Google need to recrawl the pages?

If the time period is short enough, and you used the URL removal tool, apparently not.

In the case of Followerwonk, Google removed over
300,000 URLs from its search results, and made them all reappear in mere hours. This suggests that the domain wasn’t completely removed from Google’s index, but only “masked” from appearing for a short period of time.

What about longer periods of de-indexation?

In both the Groupon and Followerwonk experiments, the sites were only de-indexed for a short period of time, and bounced back quickly.

We wanted to find out what would happen if you de-indexed a site for a longer period, say two and a half days.

I couldn’t convince the team to remove any of our sites from Google search results for a few days, so I chose a smaller personal site that I often subject to merciless SEO experiments.

In this case, I de-indexed the site and didn’t remove the request until three days later. Even with this longer period, all URLs returned within just
a few hours of cancelling the URL removal request.

In the chart below, we revoked the URL removal request on Friday the 25th. The next two days were Saturday and Sunday, both lower traffic days.

Test #2: De-index a personal site for 3 days

Likely, the URLs were still in Google’s index, so we didn’t have to wait for them to be recrawled. 

Here’s another shot of organic traffic before and after the second experiment.

For longer removal periods, a few weeks for example, I speculate that Google might drop these URLs from the index semi-permanently, and re-inclusion would take much longer.

What we learned

  1. While a portion of your organic traffic may be attributed as direct (due to browsers, privacy settings, etc) in our case the effect on direct traffic was negligible.
  2. If you accidentally de-index your site using Google Webmaster Tools, in most cases you can quickly bring it back to life by deleting the request.
  3. Reinclusion happens quickly even after we removed a site for over 2 days. Longer than this, the result is unknown, and you could have problems getting all the pages of your site indexed again.

Further reading

Moz community member Adina Toma wrote an excellent YouMoz post on the re-inclusion process using the same technique, with some excellent tips for other, more extreme situations.

Big thanks to
Peter Bray for volunteering Followerwonk for testing. You are a brave man!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

The New Science of Web Psychology: Interview with Nathalie Nahai

Posted by Erica McGillivray

We all want to influence our customers and our clients to follow the path to conversion. But what if that path fails to draw them in? That’s where Nathalie Nahai, the web psychologist, comes into play. She helps nudge your audience toward the right path and make your goals in Google Analytics happy, not to mention your boss or clients.

Nathalie recently authored the new book Webs of Influence: The Psychology of Online Persuasion. We were so impressed with Nathalie that we invited her to speak at this year’s MozCon, July 8th-10th in Seattle. Get your ticket today because you don’t want to miss this:

Buy Your Ticket Today!

How’d you get your start working in inbound marketing as a psychologist?

I have a mixed background in psychology, the arts, and web design, and it wasn’t until I met some of the digi/tech entrepreneurs in East London that I even considered applying my psychology to online interaction. I became curious about how we’re influenced online and started looking for books on the subject. When I realised that there was a huge gap in the market, I decided to write the book myself. That was the real launching point.

Those of us working with data sometimes have to fight “common wisdom.” What web psychology optimization tip always shocks people?

I think the most obvious one is based around a comfortable assumption regarding website visitors, to which my response is always, “If you think you know your target audience, you’re wrong. Where’s your research?” No matter how well you think you know your audience, you should always research them, and never assume that the knowledge you have about them is carved in stone. People change — so must your strategy.

What’s your favorite social media medium to engage in?

I’d have to say Twitter, or Instagram when I’m travelling. Though recently there have been so many genuinely fascinating updates running through my Facebook feed, including my favourite, I Fucking Love Science, that a lot of my productivity has been lost to that particular black hole.

You recently wrote a post about why people troll online. How do you recommend dealing with trolls?

Honestly? I usually write a polite, reasoned response back, and if they retort with something obnoxious (which thankfully happens fairly rarely), then I ignore the thread. There’s no point fuelling the fire.

” …given that a great proportion of our communication is non-verbal [8], and that we rely heavily on facial recognition to connect with and understand one another, it may be that losing eye-contact online actually cuts out our main avenue for empathetic communication – without which we become emotionally disconnected and more predisposed towards hostile behaviour.”

Now for some fun stuff, what’s inspired you lately?

I went to an incredible gig by Susheela Raman, an extraordinary Tamil-London musician whose skill and smouldering charisma make for spellbinding, trance-inducing performances. I’ve loved her music for years, and every time I go to one of her shows, I end up on a high for days. If you ever get the chance to see her live, grab all your friends and go. She’ll blow your mind.

Susheela Raman performs “Kamakshi.”

Okay, since I know you’re a Trekkie (I’m one too), what was your favorite non-spoilery part of Star Trek Into Darkness?

I LOVED the new Star Trek!

My favourite bit was the tribble cameo. It was a cheeky nod to one of my favourite episodes, “The Trouble With Tribbles,” where someone sneaks a tribble onto the Enterprise and they multiply so fast they clog up the whole ship.

Thank you so much, Nathalie, for sharing a bit about web psychology, some beautiful music, and a couple types of geekiness with us. :)

If you’re interested in seeing more from Nathalie, she’ll be at this year’s MozCon, July 8th-10th, talking about “How Gender and Cultural Differences in Web Psychology Affect the Customer Experience.” You can also follow her on Twitter @TheWebPsych and read her book, Webs of Influence: The Psychology of Online Persuasion.

Buy Your MozCon ticket

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Moz Blog

Posted in Latest News | Comments Off

How To Leverage the Science of Relationships to Gain True Influence

Image of Vintage Laboratory Equipment

If you define influence by the size of your Klout score, you can stop reading this right now.

If you believe influence is driven by the creation of a relationship between two parties, where one sees the other as truly knowledgeable about a particular product or service, then let’s talk about the science behind that influence.

Establishing influence is a multi-step process that moves the influenced through four key stages.

They move from awareness of the influencer, to knowing the influencer, to liking the influencer and finally finishing with preference for the influencer’s advice and counsel.

And, as an influencer, you’re going to earn your long-term living in that last stage of the relationship.

But you’re not going to get there by simply writing or talking about a particular subject matter. Instead, you need a strategic plan anchored in real science.

The law of propinquity

The law of propinquity states that the greater the physical (or psychological) proximity between people, the greater the chance that they will form friendships or romantic relationships.

The theory was first crafted by psychologists Leon Festinger, Stanley Schachter, and Kurt Back in what came to be called the Westgate studies conducted at MIT.

In the study, the strongest friendships developed between students who lived next to each other on the same floor, or between students who lived on different floors, if one of those students lived near the stairways.

In non-scientific terms, the Westgate Studies found that the frequency of contact between students was a strong indicator of future friendship formation.

The propinquity effect

There are two dimensions to propinquity, and they play different roles in marketing strategy.

There is physical propinquity and psychological propinquity. For the purposes of this article, let’s focus on psychological propinquity, as it most directly relates to creating influence through content creation.

Propinquity theory tells us that the more often people see your content, the better they get to know you. This makes sense. Each time someone is exposed to your content, they are interacting with you, your thoughts and beliefs. This leads to a feeling of knowing you, because it mirrors how we get to know people in the real world.

Repeated exposure to your content moves them from simply knowing you to actually liking you. Again, this mirrors the making friends context we’re all familiar with in the offline world.

The more we interact with people we know, the more we tend to like them — which has been repeatedly proven in numerous studies of romantic relationship formation.

Because they like you, they consume more of your content. As they do, a portion of the audience will find a common ground with your beliefs. This intersection of your beliefs, interests, or personality and your audience’s creates Psychological Propinquity. And that is what leads to preference and influence.

Know. Like. Trust.

An important note: studies also showed that being a jerk invalidates the propinquity effect. If research subjects didn’t like an initial interaction with a person, subsequent interactions didn’t lead the subjects to change their mind and begin liking the person.

Creating propinquity

Because of the power of propinquity to create influence, it’s not something you want to leave to chance.

Instead, strategically map out a propinquity platform and then fill that platform with high-quality content. The process of creating a propinquity platform is a bit too complex for a single post, but here are four steps that you can use to begin the process today.

  1. Catalog all the places your desired audience turns to for information — specifically information associated with the product or service you sell. If you’re paying attention to your audience’s world, this should be a fairly easy exercise and produce a list of obvious online and offline media, conference, and trade-show options.
  2. Begin finding those platforms that you’re not familiar with yet. Use a keyword generator tool to find the terms your audience uses to seek out relevant information. Then conduct searches on Google using those terms. Visit the sites you find on the first couple of pages and look for signs of active communities of readers.
  3. Listen to your desired audience on social media channels — Twitter makes this especially easy. Specifically, you’re looking for posts where they share a link. Create a list of sites they share, and look for correlations.
  4. Find relevant Twitter chats and participate in them. When the chat is over, scroll back through the chat and create a Twitter list of all the participants. Then follow that list for a few weeks — and again, look for tweets that contain links.

These last two are especially useful when you’re trying to create influence in a new industry where you don’t have extensive direct experience. Provided your target audience uses Twitter, these last two steps can help you quickly understand the key websites favored by your audience.
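If your audience shares links on Twitter, steps 3 and 4 above boil down to tallying which domains show up most often in the tweets you collect. Here is a minimal sketch, assuming you have already pulled the tweet texts from your Twitter list (the sample tweets and domain names below are made up for illustration):

```python
import re
from collections import Counter
from urllib.parse import urlparse

def shared_domains(tweets):
    """Tally the domains of links shared in a batch of tweet texts."""
    url_pattern = re.compile(r"https?://\S+")
    counts = Counter()
    for text in tweets:
        for url in url_pattern.findall(text):
            domain = urlparse(url).netloc.lower()
            if domain.startswith("www."):
                domain = domain[4:]  # treat www.example.com and example.com as one site
            counts[domain] += 1
    return counts

# Hypothetical tweets collected from a Twitter list of chat participants
tweets = [
    "Great read on nurture emails https://www.example-blog.com/post-1",
    "This checklist is gold: https://example-blog.com/checklist",
    "Industry survey results https://trade-news.example.org/survey",
]

for domain, n in shared_domains(tweets).most_common():
    print(domain, n)
```

The domains that float to the top of this tally are your candidate propinquity platforms: the sites your audience already visits and trusts.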

Your goal is to find online sites that your desired audience turns to for helpful information. Then determine if any of these sites will allow you to guest post or create content for their use.

By doing so, you will create multiple propinquity touches against your prospects. You’ll be the person “they see everywhere” and come to associate with category or product expertise.

The benefits of propinquity marketing

By mapping (then managing) your prospects’ progression through the various “Propinquity Points,” you can exponentially increase the frequency of your content impressions against a specific audience over a shorter time period.

This higher frequency of impressions — combined with the halo effect of your content appearing within already-trusted content channels — will more quickly move the audience through the propinquity process.

Do you have other ideas for creating a trusted propinquity platform? Let me know in the comments below …

About the Author: Tom Martin is a 20+ year veteran of the marketing and advertising industry with a penchant for stiff drinks, good debates and digital gadgets. He is the founder of Converse Digital and author of The Invisible Sale. Get more from Tom on Google+, Twitter, or LinkedIn.


Copyblogger



Online Journalism: eHow, Journatic & Narrative Science


Online Journalism & Sausage Factories.


SEO Book.com


Answers To Your Top 7 Questions From the Science of Inbound Marketing


Yesterday, HubSpot hosted a one-time-only webinar, the Science of Inbound Marketing, presented by Dan Zarrella, HubSpot’s social media scientist, and Ben Watson, VP of Marketing at Hootsuite. As you might have heard, we tried to break the world record for the largest webinar ever, but missed the mark. So for now, that record is still held by, well, HubSpot… but we gave ourselves a run for our money ;-)

Just because we didn’t break the record, though, doesn’t mean the Science of Inbound Marketing wasn’t an epic success. And as always, you had some excellent questions for our presenters, not all of which we could get to live. Since we’re not publishing the webinar or its slides, we’re more than pleased to answer a round of them here! Read on for the answers to your top seven questions about the science of inbound marketing.

Answers to the Top 7 Questions From #InboundSci

1) Why do you consider email marketing to be inbound?

Inbound marketing is all about attracting people to your business by providing interesting, helpful, relevant content, instead of interrupting people who aren’t interested in your business by shoving your message down their throat. As such, email marketing, when approached in an inbound fashion, is one part of the entire inbound marketing landscape: SEO, social media, blogging, etc.

What do I mean by “inbound” email marketing? It means you’re only emailing people who have explicitly asked that you email them — you’ve done something to attract them to you so much so that they have asked to receive more communications from you. With an opt-in email list, it’s up to you as an email marketer to keep up that inbound approach by providing those contacts with the same kind of helpful, informative content that made them attracted to you in the first place. But if you start blasting an unsegmented email list with irrelevant content, whether they opted in or not, you’re no longer doing email marketing that fits into the inbound marketing approach.

2) How do you make a somewhat boring product or industry more interesting on social media?

First, it’s critical to remember that what you think is “boring” isn’t boring to leads who are looking for that exact solution! For example, changing your oil is kind of a boring topic to most people outside the auto industry … until they need to learn how to change their oil, that is.

That being said, it’s all about finding a relatable angle for your audience. If your social media content addresses your network’s pain points, it will be inherently interesting. If you haven’t defined personas yet, this will be extremely helpful in identifying those pain points so your content can center around them.

Social media is also a more laid back medium than, say, email. I mean, it’s right there in the name — “social” media. So don’t be afraid to be a little humorous! Use visual content instead of making people read! Heck, combine the two and do some memejacking! We’ve written an entire post on how to make “boring” content interesting if you’re looking for more ideas, but if you can center your social content around your audience’s pain points and lighten it up with a more colloquial tone, you’ll have a much more engaged social network.

3) Any thoughts on what makes people more likely to engage on social media?

Good question. Engagement is one of those fluffy marketing words that’s actually pretty important — if people don’t engage with your social content, you’ll have a mighty hard time growing your reach and getting that juicy lead-gen content out to the masses. But that doesn’t mean you should only post lead-gen content; that alone won’t get you social engagement. A mix of content is best — lead generation content, third-party content, news content, and particularly visual content for sites like Facebook, Google+, and Pinterest.

Dan also performed some research to determine not just what content types incite social shares, but what, on a higher level, motivates social media fans and followers to share. Take a look at what he found:

Science of Inbound Marketing - Why tweet?

So ask yourself — is the content you’re posting to social media novel, or is it just the same ol’ thing people are used to reading? Or perhaps you’re breaking some critical news; that would certainly play into your network’s desire to share relevant content. After all, everyone loves to be the first to break big news. Or perhaps you’re not unleashing anything groundbreaking via your social networks today, but you do have some thought leadership content in your arsenal. If you can make your network sound smart to their networks, you’re providing the type of content people will share.

4) What factors tend to influence an email’s click-through rate?

As Dan and Ben both highlighted in the webinar, the time of day that an email lands in a recipient’s inbox is a huge influence on your email’s click-through rate. While the best times to send emails will inevitably vary by audience, Dan’s data shows that sending an email out at 6 AM yielded the highest click-through rate.
The Science of Inbound Marketing - CTR time-of-day

The lowest click-through rate, on the other hand, happened at 4 PM, just before the end of a 9-5 work day. People are likely paying less attention to their inboxes at this time, and are probably more focused on finishing the day’s tasks before going home.

5) If I send emails more frequently, can I get blacklisted as a spammer?

You certainly can, but getting blacklisted as a spammer isn’t typically caused by just one behavior like increasing email frequency. The frequency with which you email should really depend on the size of your audience and your ability to segment it appropriately. You don’t need to include every contact in your database in every single email send you schedule — in fact, doing so will result in more unsubscribes and spam reports, because the content of the message can’t possibly be relevant to such a broad audience. It’s that type of email marketing behavior that gets you blacklisted as a spammer. If you’re interested in increasing the frequency of your email marketing sends, it’s best to approach it scientifically; reference our blog post that tells you how to run an email sending frequency test.

6) How do audiences respond to command words in calls-to-action?

According to Dan’s data, pretty negatively — especially if that word is “submit.” In fact, CTA buttons using the word “submit” converted at a 14% rate, while CTAs without the word “submit” converted at 17%.

conversion rate by submit

When it comes to conversion rates, every percentage point matters. Dan extended his research to look at the number of clicks buttons received based on other copy, too. While “Submit” wasn’t the worst copy you could use, it was greatly surpassed in performance by “Click Here” and “Go.”

clicks by button text

The moral of the story? People respond much better to certain command words in CTAs than others!
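A caveat before you swap out every button: a three-point gap in conversion rate can be statistical noise if your sample is small. A quick way to check is a two-proportion z-test; the sketch below uses the 14% and 17% rates quoted above with hypothetical visitor counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical test: 1,000 visitors per variant at the quoted 14% and 17% rates
z, p = two_proportion_z(140, 1000, 170, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

At 1,000 visitors per variant, this difference is not significant at the conventional 0.05 level (p ≈ 0.06), so you would want more traffic before declaring a winner.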

7) How can a company or brand increase blog readership?

Well, lots of things! But the key is to capitalize on each kind of traffic your blog receives — organic, direct, paid, email, social, etc. For example, do you know that email is an excellent source of traffic for your blog already? Perhaps you should put effort towards increasing blog subscribers, then, so your blog alert emails can help drive more traffic. Start putting blog subscribe buttons on more visible parts of your website, promoting your blog subscription in email marketing sends, and sharing it via social media.

Or perhaps you’ve seen that a lot of your blog traffic comes from organic search, but you haven’t really been putting much effort into SEO. That’s a big lever you can pull to increase blog readership if you’re already seeing some success with very little effort! Identify keywords you’d like to drive traffic for, and generate topics around them to start driving more organic traffic to your site.

Many companies also find success growing their blog readership by guest blogging more frequently. Not only does guest blogging — writing blog content for other websites — help you get access to a new audience that doesn’t know you yet, but the inbound links will help you with that SEO, too ;-)

What other lingering questions do you have on the Science of Inbound Marketing?

Image credit: Stefan Baudy



HubSpot’s Inbound Internet Marketing Blog


The Science Behind Website Redesign

website redesign

 

2/3 of marketers were happy with their last website redesign!

The problem?

Roughly 1/3 of website redesigns do not go well.

 

HubSpot CMO Mike Volpe recently delivered a content-heavy webinar, The Science of Website Redesign, based on his research. Not sure what a website redesign consists of? Check out this condensed summary of the when, who, why, what, and how; then access the materials you won’t find anywhere else.

WHEN to redesign

Webinar: How often should you do a website redesign? When is the right time to redesign your website?

Quick FYI: It’s not about what people think. When you redesign depends on the data changes in your business. Whether you’re redesigning or not, make sure the content on your site is constantly evolving.

WHO does the redesign

Webinar: Who does the work on website redesign projects? Who initiates website redesign projects?

Quick FYI: You can either go through the process internally, or hire somebody else to do it. If you’re unsure you can do it, leverage an expert to improve your website. (If you are a small business, be careful going into the redesign process, or else you may fall into the portion of unsatisfied results.)

WHY redesign

Webinar: Why does your website exist? Why should you redesign it?

Quick FYI: It’s essential to establish a purpose. Can website visitors access the details they need? Have a clear purpose for your website, and a clear goal for your redesign.

 

simple web

WHAT to take into account

Webinar: How much should a website redesign cost? How long should a website redesign take? What should be focused on? What should my homepage look like? 

Quick FYI:

1. Cost: The amount of money spent does not drive success.

2. Time: Expect about four to five months. However, there is a 50/50 chance your anticipated launch will be late. (Only 49% of website redesign projects finish and launch on time.)

3. Users: Make sure your website can be easily navigated. Easy to use > flash, design, artwork. Invite users in and watch them use your site. (76% of consumers say the most important factor in a website’s design is “the website makes it easy for me to find what I want.”)

4. Successes: Look at better-designed sites and take cues from them.

5. Tracking: Measure stats to track your website’s effectiveness. Pay close attention to visitors, leads, sales, and conversion.

USE metrics

Webinar: What are the best metrics to use for my website? Are metrics important for a website redesign project?

Quick FYI: Just doing the redesign process does not guarantee results. You need to audit and read analytics. Use page graders to figure out which pages are more or less effective and look at the customer conversion rate. Work with such analytics to drive more conversions.

HOW to do a website redesign driven by science

Webinar: Preparation, process, pitfalls, proceeding, how to use HubSpot.

Website redesign cheat sheet

New Webinar: The Science of Website Redesign


Learn everything you need to know before you embark on the path of redesigning your site.

Register for this free webinar so you can construct your website in the context of a greater Internet marketing strategy.

Connect with HubSpot:

HubSpot on Twitter HubSpot on Facebook HubSpot on LinkedIn HubSpot on Google Buzz 

 


HubSpot’s Inbound Internet Marketing Blog


