Tag Archive | "Using"

How IBM is Using AI to Improve Hiring and to Retain and Retrain Employees

“We are the number one destination for Gen Z on Glassdoor,” says IBM CEO Ginni Rometty. “I get 8,000 resumes a day. I don’t make them go hunt for jobs. The AI talks to them and we ask very nicely and get permission, share this with me, share that with me, share this LinkedIn review with me, share this resume, and instead of you looking for jobs I’ll serve up jobs to you that actually match you. Our match rate of applying is 30 percent. With anybody else, it’s about nine percent.”

Ginni Rometty, CEO of IBM, discusses how they are using AI to improve hiring and to retain and retrain employees in an interview on CNBC:

AI Will Change 100 Percent of Jobs

The genesis of this was a belief that AI will change 100 percent of jobs. But if you’re going to really get the benefit of it, you have to change how the work is done. My HR leader chose to make HR the role model example of that. She has done a fantastic job putting AI in end to end. She tracks the value of this AI approach, and from the AI alone my HR function has saved $300 million from just doing that piece of it. In part, it helps the employees, because it makes HR completely employee-centric. You don’t do things to people, you do them for them. It’s consumer-centric because of how we apply the AI. The other part of it is there’s productivity on the other side. Both are important right now.

Our experience has been, and I’ll just use HR as an example, that on the one hand we were able to replace a lot of routine work. In the case of HR, our HR staffing went down by 30 percent. However, the people now doing the job of HR do far more non-routine work, and their salaries went up or their skills went up with it. You’re going to have this trade-off where technology will drive productivity but also drive you and me to do our jobs differently. It sits at that intersection.

Good for the Employee and Really Good for Business

This includes how we recruit today. We are the number one destination for Gen Z on Glassdoor. I get 8,000 resumes a day. I don’t make them go hunt for jobs. The AI talks to them, and we ask very nicely and get permission: share this with me, share that with me, share this LinkedIn review with me, share this resume. Instead of you looking for jobs, I’ll serve up jobs to you that actually match you. Our match rate of applying is 30 percent. With anybody else, it’s about nine percent.

It just shows the effectiveness of using the AI for things like a manager who is doing salary reviews. We do something to be sure salaries are fair and that no unconscious biases are in there, and then, as well, proactive retention. That is the ability to use many pieces of data to say this person is likely to quit in the next six months, so do something now so that never enters their mind. We’re 95 percent accurate and have saved $300 million in replacement costs from that. These are both good for the employee, and they’re really good for business.

We’ve Got to Make This Era of Technology More Inclusive

It’s not just driven by that (job demand driven by a booming economy). I think married to this is the idea that technology is going to change everyone’s job. It means reskilling your current population so they’ve got the skills that apply to the future. I think this point about transparency means being clear with every employee: is their skill hot in the market or not so in demand? Also, for your strategy, is it needed or not needed for the future? We update that matrix every quarter and we share it with employees. They know where they are, they say yes, I’ve got to move here, and we use AI to help them move to a new area.

Whatever is happening in the market, whether or not there were IPOs, this remake of skills would be happening anyway. It means reskilling your current population. It means a strong belief that we’ve got to make this era of technology more inclusive, with six-year high schools where community colleges and high schools are combined. We’ve been working with 500 other companies and with those schools, and there’s a pipeline of 125,000 kids coming through. Now, 15 percent of our hiring has been people without a four-year college degree. If you’re going to make this era inclusive, with technology moving so fast, you’ve got to make it so more people can have a job in this world.

I just shared with the CHROs that one of the number one issues we see is that we as employers over-spec the jobs we hire for. We write down so many credentials people should have, and it’s not true. Take the cyber analyst role, for which there are going to be two million open jobs; let me tell you, many people who don’t have all those credentials can actually fill it. If I just talked about making this era inclusive for this country, it’s that. It’s 15 percent of our hiring, and the middle of the country in particular is where we’ve done that hiring.


The post How IBM is Using AI to Improve Hiring and to Retain and Retrain Employees appeared first on WebProNews.


Using insights to find new audiences to test for awareness campaigns

Learn how to test new audience combinations with the Audience Insights tool in Google Ads Audience Manager.



Please visit Search Engine Land for the full article.



Using automation to boost PPC performance

Learn how automating bid management, scripts, error checking, reporting and ad copy can help marketers harness the power of automation to succeed in the fast-paced world of PPC.



Please visit Search Engine Land for the full article.



Using STAT for Content Strategy – Whiteboard Friday

Posted by DiTomaso

Search results are sophisticated enough to show searchers not only the content they want, but also the format they want it in. Being able to identify searcher intent and interest based on ranking results can be a powerful driver of content strategy. In this week’s Whiteboard Friday, we warmly welcome Dana DiTomaso as she describes her preferred tools and methods for developing a modern and effective content strategy.


Video Transcription

Hi, everyone. Welcome to Whiteboard Friday. My name is Dana DiTomaso. I’m President and partner of Kick Point, which is a digital marketing agency based way up in Edmonton, Alberta. Come visit sometime.

What I’m going to be talking about today is using STAT for content strategy. If you’re not familiar with STAT Search Analytics, it is in my opinion the best ranking tool on the market, and Moz is not paying me to say that, although they did pay for STAT, so now STAT is part of the Moz family of products. I really like STAT. I’ve been using it for quite some time. They are also Canadian. That may or may not influence my decision.

But one of the things that STAT does really well is it doesn’t just show you where you’re ranking, but it breaks down what type of rankings and where you should be thinking about rankings. Typically I find, especially if you’ve been working in this field for a long time, you might think about rankings and you still have in your mind the 10 blue links that we used to have forever ago, and that’s so long gone. One of the things that’s useful about using STAT rankings is you can figure out stuff that you should be pursuing other than, say, the written word, and I think that that’s something really important again for marketers because a lot of us really enjoy reading stuff.

Consider all the ways searchers like to consume content

Maybe you’re watching this video. Maybe you’re reading the transcript. You might refer to the transcript later. A lot of us are readers. Not a lot of us are necessarily visual people, so sometimes we can forget stuff like video is really popular, or people really do prefer those places packs or whatever it might be. Thinking outside of yourself and thinking about how Google has decided to set up the search results can help you drive better content to your clients’ and your own websites.

The biggest thing that I find that comes of this is you’re really thinking about your audience a lot more because you do have to trust that Google maybe knows what it’s doing when it presents certain types of results to people. It knows the intent of the keyword, and therefore it’s presenting results that make sense for that intent. We can argue all day about whether or not answer boxes are awesome or terrible.

But from a visitor’s perspective and a searcher’s perspective, they like them. I think we need to just make sure that we’re understanding where they might be showing up, and if we’re playing by Google’s rules, People Also Ask is not necessarily going anywhere.

All that being said, how can we use ranking results to figure out our content strategy? The first thing about STAT, if you haven’t used STAT before, again check it out, it’s awesome.

Grouping keywords with Data Views

But one of the things that’s really nice is you can do this thing called data views. In data views, you can group together parts of keywords. So you can do something called smart tags and say, “I want to tag everything that has a specific location name together.”

Opportunities — where are you not showing up?

Let’s say, for example, that you’re working with a moving company and they are across Canada. So what I want to see here for opportunities are things like where I’m not ranking, where are there places boxes showing up that I’m not in, or where are the People Also Ask results showing up that I’m not involved in. This is a nice way to keep an eye on your competitors.

Locations

Then we’ll also do locations. So we’ll say everything in Vancouver, group this together. Everything in Winnipeg, group this together. Everything in Edmonton and Calgary and Toronto, group all that stuff together.

Attributes (best, good, top, free, etc.)

Then the third thing can be attributes. This is stuff like best, good, top, free, cheap, all those different things that people use to describe your product, because those are definitely intent keywords, and often they will drive very different types of results than things you might consider as your head phrases.

So, for example, looking at “movers in Calgary” will drive a very different result than “top movers in Calgary.” In that case, you might get, say, a Yelp top 10 list. Or if you’re looking for “cheapest mover in Calgary,” again, a different type of search result. So by grouping your keywords together by attributes, that can really help you determine how those types of keywords are influenced by the type of search results that Google is putting out there.

Products / services

Then the last thing is products/services. So we’ll take each product and service and group it together. One of the nice things about STAT is you can do something called smart tags. So we can, say, figure out every keyword that has the word “best” in it and put it together. Then if we ever add more keywords later that also have the word “best,” they automatically go into that keyword group. It’s really useful, especially if you are adding lots of keywords over time. I recommend starting by setting up some views that make sense.
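To make the smart-tag idea concrete, here is a minimal sketch in plain Python (not STAT’s actual API) that auto-tags keywords by attribute words such as “best” or “cheap,” so keywords added later pick up the right tag automatically, the same behavior described above. The keyword list and tag names are invented for illustration.

```python
# Minimal sketch of smart-tag style keyword grouping (not STAT's actual API).
# Keywords containing an attribute word are tagged automatically, so keywords
# added later fall into the right group without manual work.

ATTRIBUTE_TAGS = {
    "best": "attribute:best",
    "top": "attribute:top",
    "cheap": "attribute:cheap",
    "free": "attribute:free",
}

def tag_keyword(keyword):
    """Return every attribute tag whose trigger word appears in the keyword."""
    words = keyword.lower().split()
    return [tag for trigger, tag in ATTRIBUTE_TAGS.items() if trigger in words]

keywords = ["movers in calgary", "top movers in calgary", "cheap movers edmonton"]
for kw in keywords:
    print(kw, "->", tag_keyword(kw) or ["untagged"])
```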

You can just import everything your client is ranking for, and you can just take a look at the view of all these different keywords. But the problem is that there’s so much data, when you’re looking at that big set of keywords, that a lot of the useful stuff can really get lost in the noise. By segmenting it down to a really small level, you can start to understand that search for that specific type of term and how you fit in versus your competition.

A deep dive into SERP features

So put that stuff into STAT, give it a little while, let it collect some data, and then you get into the good stuff, which is the SERP features. I’m covering just a tiny little bit of what STAT does. Again, they didn’t pay me for this. But there’s lots of other stuff that goes on in here. My personal favorite part is the SERP features.

Which features are increasing/decreasing both overall and for you?

So what I like here is that in SERP features it will tell you which features are increasing and decreasing overall and then what features are increasing and decreasing for you.

This is actually from a real set for one of our clients. For them, what they’re seeing are big increases in places version 3, which is the three pack of places. Twitter box is increasing. I did not see that coming. Then AMP is increasing. So that says to me, okay, so I need to make sure that I’m thinking about places, and maybe this is a client who doesn’t necessarily have a lot of local offices.

Maybe it’s not someone you would think of as a local client. So why are there a lot more local properties popping up? Then you can dive in and say, “Okay, only show me the keywords that have places boxes.” Then you can look at that and decide: Is it something where we haven’t thought about local SEO before, but it’s something where searchers are thinking about local SEO? So Google is giving them three pack local boxes, and maybe we should start thinking about can we rank in that box, or is that something we care about.

Again, not necessarily content strategy, but certainly your SEO strategy. The next thing is Twitter box, and this is something where you think Twitter is dead. No one is using Twitter. It’s full of terrible people, and they tweet about politics all day. I never want to use it again, except maybe Google really wants to show more Twitter boxes. So again, looking at it and saying, “Is Twitter something where we need to start thinking about it from a content perspective? Do we need to start focusing our energies on Twitter?”

Maybe you abandoned it and now it’s back. You have to start thinking, “Does this matter for the keywords?” Then AMP. So this is something where AMP is really tricky obviously. There have been studies where it said, “I implemented AMP, and I lost 70% of my traffic and everything was terrible.” But if that’s the case, why would we necessarily be seeing more AMP show up in search results if it isn’t actually something that people find useful, particularly on mobile search?

Desktop vs mobile

One of the things actually that I didn’t mention in the tagging is definitely look at desktop versus mobile, because you are going to see really different feature sets between desktop and mobile for these different types of keywords. Mobile may have a completely different intent for a type of search. If you’re a restaurant, for example, people looking for reservations on a desktop might have different intent from I want a restaurant right now on mobile, for example, and you’re standing next to it and maybe you’re lost.

What kind of intent is behind the search results?

You really have to think about what that intent means for the type of search results that Google is going to present. So for AMP, then you have to look at it and say, “Well, is this newsworthy? Why is more AMP being shown?” Should we consider moving our news or blog or whatever you happen to call it into AMP so that we can start to show up for these search results in mobile? Is that a thing that Google is presenting now?

We can get mad about AMP all day, but how about instead if we actually be there? I don’t want the comment section to turn into a whole AMP discussion, but I know there are obviously problems with AMP. But if it’s being shown in the search results that searchers who should be finding you are seeing and you’re not there, that’s definitely something you need to think about for your content strategy and thinking, “Is AMP something that we need to pursue? Do we have to have more newsy content versus evergreen content?”

Build your content strategy around what searchers are looking for

Maybe your content strategy is really focused on posts that could be relevant for years, when in reality your searchers are looking for stuff that’s relevant for them right now. So for example, things with movers, there’s some sort of mover scandal. There’s always a mover who ended up taking someone’s stuff and locking it up forever, and they never gave it back to them. There’s always a story like that in the news.

Maybe that’s why it’s AMP. Definitely investigate before you start to say, “AMP everything.” Maybe it was just like a really bad day for movers, for example. Then you can see the decreases. So the decrease here is organic, which is that traditional 10 blue links. So obviously this new stuff that’s coming in, like AMP, like Twitter, like places is displacing a lot of the organic results that used to be there before.

So instead you think, well, I can do organic all day, but if the results just aren’t there, then I could be limiting the amount of traffic I could be getting to my website. Take videos, for example. It was really interesting for this particular client that video is a decreasing SERP feature for them, because video is actually a big part of their content strategy. So if we see that videos are decreasing, then we can take a step back and say, “Is it decreasing in the keywords that we care about? Why is it decreasing? Do we think this is a test or a longer-term trend?”

Historical data

What’s nice about STAT is you can say “I want to see results for the last 7 days, 30 days, or 60 days.” Once you get a year of data in there, you can look at the whole year and look at that trend and see is it something where we have to maybe rethink our video strategy? Maybe people don’t like video for these phrases. Again, you could say, “But people do like video for these phrases.” But Google, again, has access to more data than you do.

If Google has decided that for these search phrases video is not a thing they want to show anymore, then maybe people don’t care about video the way that you thought they did. Sorry. So that could be something where you’re thinking, well, maybe we need to change the type of content we create. Then the last one is carousel that showed up for this particular client. Carousel, there are ones where they show lots of different results.

I’m glad that’s dropping because that actually kind of sucks. It’s really hard to show up well there. So I think that’s something to think about in the carousel as well. Maybe we’re pleased that that’s going away and then we don’t have to fight it as much anymore. Then what you can see in the bottom half are what we call share of voice.

Share of voice

Share of voice is calculated based on your ranking and all of your competitors’ ranking and the number of clicks that you’re expected to get based on your ranking position.

So the number 1 position obviously gets more clicks than the number 100 position. So the share of voice is a percentage calculated based on how many of these types of items, types of SERP features, you own versus your competitors, as well as your position in these SERP features. So what I’m looking at here is share of voice and looking at organic, places, answers, and people also ask, for example.
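As a rough illustration of that calculation (this is not STAT’s exact formula), you can weight each tracked result by an estimated click-through rate for its position and take your share of the total expected clicks. The CTR table and domains below are made-up placeholders.

```python
# Rough share-of-voice sketch: weight each result by an assumed CTR for its
# position, then take one domain's share of the total expected clicks.
# The CTR table is a made-up placeholder, not STAT's actual model.

ESTIMATED_CTR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}  # position -> CTR

def share_of_voice(serp, domain):
    """serp: list of (position, domain) pairs for one keyword's tracked features."""
    total = sum(ESTIMATED_CTR.get(pos, 0.01) for pos, _ in serp)
    owned = sum(ESTIMATED_CTR.get(pos, 0.01) for pos, d in serp if d == domain)
    return 100.0 * owned / total if total else 0.0

serp = [(1, "competitor.com"), (2, "example.com"), (3, "example.com"), (4, "other.com")]
print(f"Share of voice for example.com: {share_of_voice(serp, 'example.com'):.1f}%")
```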

So what STAT will show you is the percentage of organic, and it’s still, for this client — and obviously this is not an accurate chart, but this is vaguely accurate to what I saw in STAT — organic is still a big, beefy part of this client’s search results. So let’s not panic that it’s decreasing. This is really where this context can come in. But then you can think, all right, so we know that we are doing “eeh” on organic.

Is it something where we think that we can gain more? So the green shows you your percentage that you own of this, and then the black is everyone else. Thinking realistically, you obviously cannot own 100% of all the search results all the time because Google wouldn’t allow that. So instead thinking, what’s a realistic thing? Are we topping out at the point now where we’re going to have diminishing returns if we keep pushing on this?

Identify whether your content efforts support what you’re seeing in STAT

Are we happy with how we’re doing here? Maybe we need to turn our attention to something else, like answers for example. This particular client does really well on places. They own a lot of it. So for places, it’s maintain, watch, don’t worry about it that much anymore. Then that can drop off when we’re thinking about content. We don’t necessarily need to keep writing blog posts for things that are going to help us to rank in the places pack, because it’s not something that’s going to influence that ranking any further.

We’re already doing really well. But instead we can look at answers and people also ask, which for this particular client they’re not doing that well. It is something that’s there, and it is something that it may not be one of the top increases, but it’s certainly an increase for this particular client. So what we’re looking at is saying, “Well, you have all these great blog posts, but they’re not really written with people also ask or answers in mind. So how about we go back and rewrite the stuff so that we can get more of these answer boxes?”

That can be the foundation of that content strategy. When you put your keywords into STAT and look at your specific keyword set, really look at the SERP features and determine what does this mean for me and the type of content I need to create, whether it’s more images for example. Some clients, when you’re looking at e-commerce sites, some of the results are really image heavy, or they can be product shopping or whatever it might be.

There are really specific different features, and I’ve only shown a tiny subset. STAT captures all of the different types of SERP features. So you can definitely look at anything if it’s specific to your industry. If it’s a feature, they’ve got it in here. So definitely take a look and see where are these opportunities. Remember, you can’t have a 100% share of voice because other people are just going to show up there.

You just want to make sure that you’re better than everybody else. Thanks.

Video transcription by Speechpad.com


How LinkedIn is Using Machine Learning to Determine Skills

One of the more interesting reveals that Dan Francis, Senior Product Manager for LinkedIn Talent Insights, provided in a recent talk about the Talent Insights tool is how LinkedIn is using machine learning to determine the skills of its members. He says that there are now over 575 million members in the LinkedIn database and over 35,000 standardized skills in LinkedIn’s skills taxonomy. The way LinkedIn figures out what skills a member has is via machine learning technology.

Dan Francis, Senior Product Manager, LinkedIn Talent Insights, discussed Talent Insights in a recent LinkedIn video embedded below:

LinkedIn Using Machine Learning to Determine Skills

The skills data in Talent Insights comes from a variety of sources, mainly from a member’s profile. There are over 35,000 standardized skills that we have in LinkedIn’s skills taxonomy, and the way we’re figuring out what skills a member has is using machine learning. We can identify skills that a member has based on things that they explicitly added to their profile.

The other thing that we’ll do is look at the text of the profile. There’s a field of machine learning called natural language processing, and we’re basically using that. It’s scanning through all the words that are on a member’s profile, and when we can determine that a skill pertains to the member, as opposed to the company or another subject, we’ll say okay, we think that this member has this skill. We also look at other attributes, like their title or the company, to make sure they actually are very likely to have that skill.

The last thing that we’ll do is look at the skills a member has and figure out what the skill relationships are. So as an example, let’s say that a member has Ember, which is a type of JavaScript framework. Since we know that they know Ember, we can infer that they also know JavaScript. So if somebody’s running a search like that, we’ll surface them in the results. I think that the most important reason why this is helpful, and the real benefit to users of the platform, is that when you’re searching, you want to get as accurate a view of the population as possible. What we’re trying to do is look at all the different signals that we possibly have to represent that view.
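LinkedIn hasn’t published its implementation, but the skill-relationship idea can be illustrated with a tiny graph lookup: explicit skills pulled from a profile are expanded with the skills they imply, so a member who lists Ember is also surfaced for JavaScript searches. The skill map below is a toy example, not LinkedIn’s taxonomy.

```python
# Toy illustration of skill-relationship expansion (not LinkedIn's actual system).
# Explicit skills from a profile are expanded with the skills they imply, so a
# member who lists "Ember" is also surfaced for "JavaScript" searches.

IMPLIED_SKILLS = {
    "ember": {"javascript"},
    "react": {"javascript"},
    "numpy": {"python"},
}

def expand_skills(explicit_skills):
    expanded = {s.lower() for s in explicit_skills}
    frontier = list(expanded)
    while frontier:                          # follow implication chains if present
        skill = frontier.pop()
        for implied in IMPLIED_SKILLS.get(skill, set()):
            if implied not in expanded:
                expanded.add(implied)
                frontier.append(implied)
    return expanded

print(expand_skills(["Ember", "CSS"]))       # {'ember', 'css', 'javascript'}
```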

575 Million People on LinkedIn Globally and Adding 2 Per Second

Today, LinkedIn has over 575 million members that are on the platform globally. This is actually growing at a pretty rapid clip, so we’re adding about two members per second. One of the great things about LinkedIn is that we’re actually very well represented in terms of the professional workforce globally. If you look at the top 30 economies around the world, we actually have the majority of professionals in all of those economies.

LinkedIn is the World’s Largest Aggregator of Jobs

I think there’s often a perception that most of the data comes directly from LinkedIn, stuff that’s posted on LinkedIn, and jobs are one notable exception to that. Plenty of companies and people will post jobs on LinkedIn, and that’s information that does get surfaced. However, we’re also the world’s largest aggregator of jobs. At this point there are over 20 million jobs on LinkedIn.

The way that we’re getting that information is we’re working with over 40,000 partners. These are job boards, ATS’s, and direct customer relationships. We’re collecting all of those jobs, standardizing them, and showing them on our platform. The benefit is not just for displaying the data in Talent Insights, the benefit is also when members are searching on LinkedIn.com, we’re giving them as representative a view of the job market as possible.

The post How LinkedIn is Using Machine Learning to Determine Skills appeared first on WebProNews.


Sparrho CEO: Using Augmented Intelligence to Build Trust in Brands

Many companies are working to build authentic and trusted brands with consumers. This is especially true with pharmaceuticals, biotech, and med-tech companies. The CEO of Sparrho, Dr. Vivian Chan, says that their approach combines artificial intelligence and 400,000 Ph.D.’s to deliver scientific data to companies. This data helps companies back up their marketing messages which enables them to more effectively build that vital trust with their customers.

Dr. Vivian Chan, Sparrho CEO, recently discussed on CNBC their unique hybrid AI approach to helping companies use science and information to back up their brands’ messaging:

AI Enables Humans to Make Better-Informed Decisions

Artificial intelligence is really about algorithms and how we can use the data we collect to enable humans to make better-informed decisions. It’s not at all about having computers make decisions on behalf of humans. In a way, I think it’s machines that will be helping evolve the tasks, not actually replacing the human roles. Human roles themselves will be evolving as the technology improves. This allows humans to have more headspace to think about things that machines can’t do right now.

Machines can’t yet summarize a lot of pieces of contextual analysis to 100 percent accuracy, and humans are still better at making nonlinear connections. For example, being able to say that this mathematical equation is super relevant to an agricultural problem. If we don’t have the tagging, references, and citations, humans are still better at making those new nonlinear connection points than machines.

Humans are still good at coming up with the questions. If you actually pose the right question and you train the data and the algorithms you might actually get the right answer. However, you still need to have the humans to be thinking about what the questions are in order to ultimately get the answers.

It’s About Using AI as a Means to an End

I think the angle is really about using AI as a means to an end and not just the end. Ultimately, this is a hybrid approach, and different people call it different things. Even MIT professors are calling it a hybrid approach. We’re calling it augmented intelligence. We need to come up with a good relationship between humans and machines. Marketing is about building relationships. It’s about building relationships between brands and consumers, and the question now is how do we build that relationship digitally?

Using Science to Build an Authenticated Brand

In this digital age, consumers are a lot more tech savvy but also information savvy. They want to know what the science is behind certain things. Even if you’re talking about CPG, consumer packaged goods, what is the science behind a shampoo product when it claims 98 percent prevention of hair loss? What is the real science behind that, and how do we actually bring that simplified, science-oriented message to the consumer? How can consumers educate themselves and make informed decisions based on the products and thereby build a stronger brand relationship?

Ultimately what we’re trying to do at Sparrho is simplify science to build trust in brands. Especially for marketing departments and brands, it’s really about allowing them to have the evidence-based science and the facts, because building a very authenticated brand is what is meaningful to consumers. Research says that about 71 percent of consumers immediately reject content that looks like a sales pitch. Building a relationship and having an authenticated brand and content is super important in building that relationship between brand and consumers.

Sparrho Provides Content as a Service On Demand

We’re going even wider with that by providing what we call content as a service, or relevant content on demand. We then integrate that into the brands’ digital platforms. We have what we call augmented intelligence, with over 16 million pieces of content that are augmented by a network of more than 400,000 monthly active PhDs in 150 countries. They curate and summarize what’s actually happening in the latest science.

We know that about 60 percent of pharmaceutical, biotech, and even med-tech companies are spending more than $50 million per year just on content. Content has been the major driver for a lot of their marketing. In pharmaceuticals, they’re trying to really bring the relationship they have offline to online. It’s at the heart of this digital transformation age that we are going through. This is really helping bring that relationship online by using the right engaging content. Our goal with Sparrho is to drive more engagement and ultimately more sales.

The post Sparrho CEO: Using Augmented Intelligence to Build Trust in Brands appeared first on WebProNews.


How to Stop Drowning in Data and Begin Using Your Metrics Wisely

Digital marketers have a problem: We’ve got too much data. It sounds like a ridiculous complaint coming from a data…

The post How to Stop Drowning in Data and Begin Using Your Metrics Wisely appeared first on Copyblogger.



Using a New Correlation Model to Predict Future Rankings with Page Authority

Posted by rjonesx.

Correlation studies have been a staple of the search engine optimization community for many years. Each time a new study is released, a chorus of naysayers seems to come magically out of the woodwork to remind us of the one thing they remember from high school statistics — that “correlation doesn’t mean causation.” They are, of course, right in their protestations and, to their credit, an unfortunate number of times it seems that those conducting the correlation studies have forgotten this simple aphorism.

We collect a search result. We then order the results based on different metrics like the number of links. Finally, we compare the orders of the original search results with those produced by the different metrics. The closer they are, the higher the correlation between the two.
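In code, that comparison usually boils down to a rank correlation such as Spearman’s. Here is a minimal sketch; the positions and link counts below are invented, and SciPy is assumed to be available.

```python
# Classic correlation-study sketch: compare Google's ordering of a SERP with the
# ordering implied by a metric (here, link counts). The data below is invented.
from scipy.stats import spearmanr

serp_positions = [1, 2, 3, 4, 5]          # Google's order for one keyword
link_counts    = [120, 85, 90, 40, 12]    # metric value for each URL, same order

# Spearman compares rank orders: +1 means the metric perfectly predicts the order.
# Link counts are negated so that "more links" maps to a better (lower) position.
rho, p_value = spearmanr(serp_positions, [-c for c in link_counts])
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
```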

That being said, correlation studies are not altogether fruitless simply because they don’t necessarily uncover causal relationships (ie: actual ranking factors). What correlation studies discover or confirm are correlates.

Correlates are simply measurements that share some relationship with the independent variable (in this case, the order of search results on a page). For example, we know that backlink counts are correlates of rank order. We also know that social shares are correlates of rank order.

Correlation studies also provide us with direction of the relationship. For example, ice cream sales are positive correlates with temperature and winter jackets are negative correlates with temperature — that is to say, when the temperature goes up, ice cream sales go up but winter jacket sales go down.

Finally, correlation studies can help us rule out proposed ranking factors. This is often overlooked, but it is an incredibly important part of correlation studies. Research that provides a negative result is often just as valuable as research that yields a positive result. We’ve been able to rule out many types of potential factors — like keyword density and the meta keywords tag — using correlation studies.

Unfortunately, the value of correlation studies tends to end there. In particular, we still want to know whether a correlate causes the rankings or is spurious. Spurious is just a fancy sounding word for “false” or “fake.” A good example of a spurious relationship would be that ice cream sales cause an increase in drownings. In reality, the heat of the summer increases both ice cream sales and people who go for a swim. That swimming can cause drownings. So while ice cream sales is a correlate of drowning, it is *spurious.* It does not cause the drowning.

How might we go about teasing out the difference between causal and spurious relationships? One thing we know is that a cause happens before its effect, which means that a causal variable should predict a future change.

An alternative model for correlation studies

I propose an alternate methodology for conducting correlation studies. Rather than measure the correlation between a factor (like links or shares) and a SERP, we can measure the correlation between a factor and changes in the SERP over time.

The process works like this:

  1. Collect a SERP on day 1
  2. Collect the link counts for each of the URLs in that SERP
  3. Look for any URLs that are out of order with respect to links; for example, if position 2 has fewer links than position 3
  4. Record that anomaly
  5. Collect the same SERP in 14 days
  6. Record if the anomaly has been corrected (ie: position 3 now out-ranks position 2)
  7. Repeat across ten thousand keywords and test a variety of factors (backlinks, social shares, etc.)
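Here is a minimal sketch of steps 1 through 6 in Python. fetch_serp_day1, fetch_serp_day14, and metric are hypothetical stand-ins for whatever rank tracker and link index you use, not real library calls; the logic simply records adjacent pairs that are out of order by the metric and checks later whether Google flipped them.

```python
# Sketch of the anomaly-tracking methodology (steps 1-6 above).
# fetch_serp_day1, fetch_serp_day14, and metric are hypothetical stand-ins for
# your rank tracker and link index; they are not real library calls.

def find_anomalies(serp, metric):
    """Return adjacent URL pairs ranked opposite to what the metric predicts."""
    anomalies = []
    for upper, lower in zip(serp, serp[1:]):   # (position n, position n+1)
        if metric(upper) < metric(lower):      # lower-ranked URL has more of the metric
            anomalies.append((upper, lower))
    return anomalies

def percent_corrected(keywords, fetch_serp_day1, fetch_serp_day14, metric):
    flagged = corrected = 0
    for kw in keywords:
        day1 = fetch_serp_day1(kw)             # list of URLs in ranked order
        day14 = fetch_serp_day14(kw)           # same SERP collected ~14 days later
        for upper, lower in find_anomalies(day1, metric):
            if upper in day14 and lower in day14:
                flagged += 1
                # "Corrected" means the previously lower URL now out-ranks the other.
                if day14.index(lower) < day14.index(upper):
                    corrected += 1
    return 100.0 * corrected / flagged if flagged else 0.0
```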

So what are the benefits of this methodology? By looking at change over time, we can see whether the ranking factor (correlate) is a leading or lagging feature. A lagging feature can automatically be ruled out as causal. A leading factor has the potential to be a causal factor.

We collect a search result. We record where the search result differs from the expected predictions of a particular variable (like links or social shares). We then collect the same search result 2 weeks later to see if the search engine has corrected the out-of-order results.

Following this methodology, we tested 3 different common correlates produced by ranking factors studies: Facebook shares, number of root linking domains, and Page Authority. The first step involved collecting 10,000 SERPs from randomly selected keywords in our Keyword Explorer corpus. We then recorded Facebook Shares, Root Linking Domains, and Page Authority for every URL. We noted every example where 2 adjacent URLs (like positions 2 and 3 or 7 and 8) were flipped with respect to the expected order predicted by the correlating factor. For example, if the #2 position had 30 shares while the #3 position had 50 shares, we noted that pair. Finally, 2 weeks later, we captured the same SERPs and identified the percent of times that Google rearranged the pair of URLs to match the expected correlation. We also randomly selected pairs of URLs to get a baseline percent likelihood that any 2 adjacent URLs would switch positions. Here were the results…

The outcome

It’s important to note that it is incredibly rare to expect a leading factor to show up strongly in an analysis like this. While the experimental method is sound, it’s not as simple as a factor predicting the future — it assumes that in some cases we will know about a factor before Google does. The underlying assumption is that in some cases we have seen a ranking factor (like an increase in links or social shares) before Googlebot has, and that in the 2-week period Google will catch up and correct the incorrectly ordered results. As you can expect, this is a rare occasion. However, with a sufficient number of observations, we should be able to see a statistically significant difference between lagging and leading results. However, the methodology only detects cases where a factor is both leading and Moz Link Explorer discovered the relevant factor before Google.

Factor                               Percent Corrected   P-Value   95% Min    95% Max
Control                              18.93%              0
Facebook Shares (controlled for PA)  18.31%              0.00001   -0.6849    -0.5551
Root Linking Domains                 20.58%              0.00001   0.016268   0.016732
Page Authority                       20.98%              0.00001   0.026202   0.026398

Control:

In order to create a control, we randomly selected adjacent URL pairs in the first SERP collection and determined the likelihood that the second will outrank the first in the final SERP collection. Approximately 18.93% of the time the worse ranking URL would overtake the better ranking URL. By setting this control, we can determine if any of the potential correlates are leading factors – that is to say that they are potential causes of improved rankings.
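One way to check whether a factor’s correction rate genuinely beats that ~18.93% control, rather than being noise, is a two-proportion z-test over the counts of flagged and corrected pairs. This is not necessarily the exact test used in the study, and the pair counts below are placeholders, not the real sample sizes.

```python
# Two-proportion z-test sketch: is a factor's correction rate significantly
# higher than the random-pair control? Not necessarily the study's exact test,
# and the pair counts below are placeholders, not the real sample sizes.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(corrected_a, flagged_a, corrected_b, flagged_b):
    p_a, p_b = corrected_a / flagged_a, corrected_b / flagged_b
    pooled = (corrected_a + corrected_b) / (flagged_a + flagged_b)
    se = sqrt(pooled * (1 - pooled) * (1 / flagged_a + 1 / flagged_b))
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))              # two-sided p-value

# Placeholder counts: Page Authority pairs vs. randomly chosen control pairs.
z, p = two_proportion_ztest(corrected_a=2098, flagged_a=10000,
                            corrected_b=1893, flagged_b=10000)
print(f"z = {z:.2f}, p = {p:.5f}")
```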

Facebook Shares:

Facebook Shares performed the worst of the three tested variables. Facebook Shares actually performed worse than random (18.31% vs 18.93%), meaning that randomly selected pairs would be more likely to switch than those where shares of the second were higher than the first. This is not altogether surprising as it is the general industry consensus that social signals are lagging factors — that is to say the traffic from higher rankings drives higher social shares, not social shares drive higher rankings. Subsequently, we would expect to see the ranking change first before we would see the increase in social shares.

RLDs

Raw root linking domain counts performed substantially better than shares at ~20.5%. As I indicated before, this type of analysis is incredibly subtle because it only detects when a factor is both leading and Moz Link Explorer discovered the relevant factor before Google. Nevertheless, this result was statistically significant with a P value <0.0001 and a 95% confidence interval that RLDs will predict future ranking changes around 1.5% greater than random.

Page Authority

By far, the highest performing factor was Page Authority. At 21.5%, PA correctly predicted changes in SERPs 2.6% better than random. This is a strong indication of a leading factor, greatly outperforming social shares and outperforming the best predictive raw metric, root linking domains. This is not surprising. Page Authority is built to predict rankings, so we should expect it to outperform raw metrics in identifying when a shift in rankings might occur. Now, this is not to say that Google uses Moz Page Authority to rank sites, but rather that Moz Page Authority is a relatively good approximation of whatever link metrics Google is using to rank sites.

Concluding thoughts

There are so many different experimental designs we can use to help improve our research industry-wide, and this is just one of the methods that can help us tease out the differences between causal ranking factors and lagging correlates. Experimental design does not need to be elaborate and the statistics to determine reliability do not need to be cutting edge. While machine learning offers much promise for improving our predictive models, simple statistics can do the trick when we’re establishing the fundamentals.

Now, get out there and do some great research!


Using Empathy and Connection to Craft More Powerful Content

I recently heard our friend Joanna Wiebe say something that blew my mind. I didn’t get it down word for…

The post Using Empathy and Connection to Craft More Powerful Content appeared first on Copyblogger.



Augmented reality artist creates sculptures using Bing search

Using the Bing Search API and a proprietary AR platform, an artist created sculptures composed entirely of dynamic search images customized in real time.



Please visit Search Engine Land for the full article.

