Hey gang, this is my daily Search Engine Optimization (SEO) thought.
We [all] know that Google uses machine learning to filter out the good, the bad, and the ugly websites. It's based on human raters, which are used to feed a model as input to the system.
So my question is – do we think there's a similar system that counts and uses Social Media reactions?
Is there a lucrative predictive model that can be used to assign a popularity score based on likes, shares, tweets, pins, and comments?
Is it correlation or causation, after all?
Human raters are quality control. That's why they are called QUALITY raters. They rate the quality of the algorithm by rating the websites that the algorithm presents to them.
They are not "used to feed a model as input to the system."
In certain circumstances human raters are used to form a yardstick (a baseline) for a new algorithm to be tested against.
But when we talk about Quality Raters, what's being discussed is just plain old Quality Control.
More information, plus citations to scientific research, here. The links to research are there to show you that I'm not just pulling opinions out of the air; the statements have validity because they are based on scientific research conducted by Google and/or other scientists.
Google Rankings Dropped? How to Evaluate with Raters Guidelines – Search Engine Journal
"In certain circumstances human raters are used to form a yardstick (a baseline) for a new algorithm to be tested against." Doesn't this mean they are using humans to train a machine?
Here is what Bill also said a few years back, under "Takeaways": "The patent provides more details on how human ratings and signals from a website might be used to create a model for quality rating that can help determine how pages are ranked in search results, using a machine learning approach to generate ratings for pages based upon the sample set of pages actually rated."
How Google May Rank Web Sites Based on Quality Ratings
Yup, that's machine learning as I described. If you check the article I linked to, I link to another example of how that's done. This is commonly done with tasks such as ranking images.
Giving the machines a baseline of what a human rater rated as quality then having the machine try to predict what the human would have rated is the task I described but that's just one small area.
That happens at the beginning, to generate training data for the machine to learn from, including cases where raters rate the same thing differently.
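To make the idea above concrete, here is a purely hypothetical sketch of "giving the machine a baseline of human ratings and having it predict what the rater would have said." The features, labels, and model choice (a tiny logistic regression) are all my invention for illustration; nothing here reflects Google's actual features or model.

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    """Tiny pure-Python logistic-regression trainer (gradient descent)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log-loss for this sample
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a human rater would call this page high quality."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy "pages": invented features, e.g. (content_depth, ad_density).
# Labels are what a human rater said: 1 = high quality, 0 = low quality.
rated_pages = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8)]
rater_labels = [1, 1, 0, 0]

w, b = train_logistic(rated_pages, rater_labels)

# The model now tries to predict what a rater *would* rate unseen pages.
print(predict(w, b, (0.85, 0.15)))  # close to the "high quality" cluster
print(predict(w, b, (0.15, 0.85)))  # close to the "low quality" cluster
```

The point of the sketch is the workflow, not the model: a small human-rated sample becomes the yardstick, and the machine generalizes it to pages no human ever rated.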
The majority of what the human quality raters are doing is quality control. Danny Sullivan also explained that recently in a series of his SearchLiaison tweets.
But why would Google still need humans if they now have trained machines to decide which pages are good, bad, or ugly? It's because those ratings eventually end up in the model anyway, no? There are billions of pages that need to be assessed, and they do it with a machine taught by people.
"Is there a lucrative predictive model that can be used to assign a popularity score based on likes, shares, tweets, pins, and comments?"
Not that I'm aware of.
Six or so years ago Moz published a correlation study about Facebook Likes and rankings. That hypothesis has since been discredited. Danny Sullivan, then of SEL, was at SMX at the time trying to tamp it down, as were Googlers.
As a sidenote, there was NEVER any proof in the form of research or patents. Correlation studies are meaningless in the absence of research or patents that demonstrate that something is possible.
The idea of social media as a ranking signal still keeps popping up as a hypothesis, and always without any solid foundation in the form of patents or research.
Three years ago, research was published that explored ways to assess web page credibility using external signals like links and social media.
Here's a link to the research if you want to understand what's real and what's not.
http://www.www2015.it/doc…/proceedings/companion/p1441.pdf
I think that just because Google was granted a patent doesn't mean they use it. Also, just because there is no patent doesn't mean the invention isn't in the algorithm. And last: I think that if we set out to find a patent that touches on using social signals, we will find one. But again, my question is theoretical: is it possible to use machine learning and social signals to decide whether a page is really popular or not?
"my question is theoretical"
Actually, it's hypothetical. 😉
That's one of the points I'm making.
Loosely speaking, theoretical is when there is some kind of basis for the idea. Hypothetical is when there is no basis.
You are entitled to your opinion and hypothesis. I'm just adding my informed opinion to help you out. Not arguing.
Actually, I'm also not here to argue, and my English is sometimes poor. This post was about machine learning, and whether we think G uses it for a megaton of other things or just page quality. I was also thinking of advertorials, unnatural anchor text, etc.
Because I could see how social engagement might be a ranking signal. And it might just so happen that machine learning could extract those signals that make all this feasible. Just saying.
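Just to illustrate what such a hypothetical popularity score might even look like, here is a minimal sketch. The signal weights and the log-scaling are entirely my invention for illustration; nothing here is a confirmed Google signal or formula.

```python
import math

# Invented weights: which social signals "count" more is pure assumption.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "tweets": 2.0, "pins": 1.5, "comments": 2.5}

def popularity_score(signals):
    """Combine raw counts with log scaling so one viral metric can't dominate.

    `signals` maps a signal name to its raw count; missing signals count as 0.
    """
    return sum(w * math.log1p(signals.get(name, 0)) for name, w in WEIGHTS.items())

# Two hypothetical pages with made-up engagement numbers.
page_a = {"likes": 1200, "shares": 300, "comments": 45}
page_b = {"likes": 50, "shares": 2, "comments": 1}

print(popularity_score(page_a) > popularity_score(page_b))  # True
```

Even a toy like this shows why the question matters: the model's output depends entirely on weights someone has to choose, and choosing them well is exactly the kind of task you would hand to machine learning, if the signals were trustworthy in the first place.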
My understanding, and I really need to dig up a Google statement on this, is that social signals can be used as a validating signal, to validate their opinion of the site, kind of like a letter of recommendation, but not as a ranking signal in itself.
Also, I think the SEO industry focuses too much on ranking signals. There are things going on outside of the ranking engine that ultimately can play a role in how the SERPs are arranged. And ML is particularly active there. Check out what the Modification Engine is.
ML can create models for estimating user satisfaction, take a site with poor ranking signals, and promote it to the top of the Search Engine Results Pages (SERPs). I think that was Microsoft research.
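A hedged sketch of that re-ranking idea: a predicted-satisfaction score blends with the base ranking score and can promote a page past pages with stronger traditional signals. The blend weight and all the scores below are invented for illustration; this is not any search engine's actual formula.

```python
def rerank(results, satisfaction_weight=0.6):
    """Sort results by a blend of base ranking score and predicted satisfaction.

    satisfaction_weight=0 reproduces the original ranking;
    higher values let the ML satisfaction model override it.
    """
    def blended(r):
        base = r["base_score"]
        sat = r["predicted_satisfaction"]
        return (1 - satisfaction_weight) * base + satisfaction_weight * sat
    return sorted(results, key=blended, reverse=True)

# Hypothetical results: b.example has weak ranking signals but the model
# predicts users will be very satisfied with it.
results = [
    {"url": "a.example", "base_score": 0.9, "predicted_satisfaction": 0.2},
    {"url": "b.example", "base_score": 0.4, "predicted_satisfaction": 0.95},
]

print(rerank(results)[0]["url"])  # b.example is promoted to the top
```

That's the sense in which something outside the "ranking engine" can still rearrange the SERPs: the base scores never changed, but the final ordering did.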