WordPress Turns Off the Taps on WP Engine

SEO News Topics – Canonical Chronicle 83

In this week's Canonical Chronicle, we discuss:

  • WordPress banning WP Engine
  • Google retiring the cache: search operator
  • Preferred source labels in the SERPs
  • An update to Google's spam policies
  • Post-core-update traffic losses

Get the companion deck for free

Subscribe to our email newsletter and view back issues here.

WordPress bans WP Engine

Last week, tens of thousands of WP Engine customers were unable to make any updates to their websites after WordPress blocked WP Engine's access to its servers, which host the themes, plugins and a host of other things webmasters need to run a WordPress site.

The block is part of an ongoing legal battle between Matt Mullenweg and WP Engine. The debacle started when Mullenweg publicly expressed his distaste (quite rightly, in our view) for WP Engine's business model of taking free open source resources and reselling them at a premium without reinvesting in the project.

WP Engine responded by going legal, sending a cease-and-desist letter demanding that Mullenweg stay quiet or face a lawsuit.

WordPress then blocked access to all of the free resources WP Engine had been using to power its $500m business and filed its own suit, this time over trademark violation.

The official statement from WordPress was a zinger. They said:

“If WP Engine wants to control your WordPress experience, they need to run their own user login system, update servers, plugin directory, theme directory, pattern directory, block directory, translations, photo directory, job board, meetups, conferences, bug tracker, forums, Slack, Ping-o-matic, and showcase. Their servers can no longer access our servers for free.”

This will affect tens of thousands of websites, including this SEO agency's website, which is hosted with WP Engine. Despite the inconvenience, I am genuinely happy to see Matt taking action against WP Engine. Open source ecosystems are a gift to humanity and should be protected at all costs. If you take from open source projects, make a handsome sum of money and don't give back to the community, that is parasitic behaviour and not in the spirit of open source.

If you have some time, please read Matt's blog post, Ecosystem Thinking.

What to do if you have been affected by the WP Engine issue

So, if you have been affected, what can you do?

Why not do what we at Type A are doing and move your hosting to WordPress.com? There is an easy-to-use migration option, and you can use a plugin like WP All Import to move your content across. Not only is it cheaper, the money is reinvested into open source, and open source moves humanity forward.
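Whichever route you take, it is worth keeping your own backup of your content first. Here is a minimal sketch of one way to do that, assuming the default WordPress REST API routes are enabled on your site (example.com is a placeholder for your own domain):

# Minimal sketch: save a JSON copy of your posts via the WordPress REST API
# before migrating. Assumes the default /wp-json/wp/v2/ routes are enabled;
# example.com is a placeholder for your own domain.
import json
import requests

API = "https://example.com/wp-json/wp/v2/posts"

def fetch_all_posts():
    posts, page = [], 1
    while True:
        resp = requests.get(API, params={"per_page": 100, "page": page})
        if resp.status_code == 400:  # WordPress returns 400 past the last page
            break
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        posts.extend(batch)
        page += 1
    return posts

posts = fetch_all_posts()
with open("posts-backup.json", "w") as f:
    json.dump(posts, f, indent=2)
print(f"Saved {len(posts)} posts")

This is not a replacement for a proper migration plugin; it simply leaves you with a JSON copy of your posts that no hosting dispute can take away.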

Since the story broke, Matt has extended the deadline to 1st October to give webmasters time to update their websites and move them to another host.

Google Cache Search Operator will be retired

If there is one thing Google loves more than killing our traffic with core updates, it’s killing their own products.

Alongside authorship markup, the keywords meta tag and Google Plus, another feature has been added to the Google graveyard.

This time, it's the cache: search operator.

For those who don't know, Google has commands called "advanced operators" that you can use to get specific data back from the search engine. For example, if you type "site:domain.com" you will get a list of the pages Google has indexed from that domain. Until this week, you could also type "cache:domain.com", which would show you the rendered, cached version of a page that Google had stored. However, the feature had been buggy for a while, and most SEOs avoided it due to its unreliability.

Last week, we reported on archive.org now being featured in the SERPs when you click on "more about this result". This is the best option for checking cached and historic versions of URLs that you do not own. However, for checking the index status of your own website, we recommend using the live URL test inside Search Console's URL Inspection tool (the successor to the old Fetch and Render feature).

Not only will it render the page on demand, it will also show the downloaded and rendered HTML as well as the index status.
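If you have many URLs to check, the same inspection is available programmatically through the URL Inspection API. Here is a minimal sketch, assuming you already have an OAuth 2.0 access token with the Search Console scope for a property you have verified (the token and URLs below are placeholders):

# Minimal sketch: check a URL's index status via the Search Console
# URL Inspection API. ACCESS_TOKEN, the page URL and the property URL are
# placeholders; you need an OAuth 2.0 token for a property you own.
import requests

ENDPOINT = "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect"
ACCESS_TOKEN = "ya29.placeholder-oauth-token"

def inspect(page_url, property_url):
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"inspectionUrl": page_url, "siteUrl": property_url},
    )
    resp.raise_for_status()
    result = resp.json()["inspectionResult"]["indexStatusResult"]
    return result.get("coverageState"), result.get("lastCrawlTime")

coverage, last_crawl = inspect("https://example.com/some-page/", "https://example.com/")
print(coverage, last_crawl)

Note that the API is quota-limited, so it suits spot checks rather than full-site audits.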

Google Updates Spam Policies and Then Forbes and Others See a Decline

Google have made a small but significant update to their spam policy documentation around “site reputation abuse”.

This is a direct lift from the updated document:

“Close oversight or involvement is when the first-party hosting site is directly producing or generating unique content (for example, via staff directly employed by the first-party, or freelancers working for staff of the first-party site). It is not working with third-party services (such as “white-label” or “turnkey”) that focus on redistributing content with the primary purpose of manipulating search rankings.”

What is site reputation abuse?

Site reputation abuse is when a large domain like CNN, USA Today, Forbes, the Telegraph or hundreds of other high domain rating websites abuses the power of its domain to publish affiliate content that has nothing to do with its core business. For example, Forbes is a business news magazine, yet it has a section of its site called "Forbes Advisor" that reviews everything from CBD gummies to pet insurance. Forbes has zero Experience or Expertise in these topics, but because its domain is so powerful, it has plenty of Authority and Trust (EEAT), so it can easily rank, profit and pollute the SERP with generic content written by third-party staff writers.

SEOs have started to notice a connection

Lars Lofgren, co-founder of StonePress, wrote a blog post and a subsequent follow-up on his personal blog about these big sites and their site reputation abuse. He uncovers what appears to be Forbes, CNN and USA Today using the same third-party content company to write affiliate content on their main domains.

Interestingly, this is exactly the behaviour that Google's newly updated spam policy documentation describes as site reputation abuse.

But wait, there’s more

A few days after Google updated their policy, Glenn Gabe took to Twitter to report a major decline in search traffic for… drumroll please… Forbes.

Interestingly, it seems to be mostly from their /health/ subfolder that contains the affiliate content about CBD gummies.

No one likes to see another company lose traffic, but it is nice to see Google living up to their statements and making these corrections.

Search Engine Roundtable has images of exactly what's been updated, for those who want to do some more digging.

Preferred Source Labels in the SERPs

This week, SEOs have reported on Twitter that they are seeing new labels on certain domains saying “preferred source”.

We personally cannot replicate it in the UK, but it appears to be one of many SERP tests that Google is carrying out. The news that Google is running a UX test is not, in itself, very interesting. However, why it chooses some domains as a "preferred source" and not others is very interesting indeed, as it gives us an insight into how Google thinks about trusted domains.

Speculation about what a “preferred source” might be

Remember that when Google's API documentation was leaked, we learned that Google have a trusted seed set of domains they start crawling from. These are likely to be domains like Wikipedia, the BBC and, hilariously, Forbes.

As they follow links from site to site, the number of hops away from the trusted sources will dictate how they view you. So if you have links from these trusted seed sites, it's likely you will be seen as a "preferred source".
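To make the hops idea concrete, here is a toy sketch (the link graph and seed set are invented for illustration; this is not Google's actual algorithm):

# Toy illustration of the "hops from a trusted seed" idea. The link graph
# and seed set below are made up; this is not Google's real system.
from collections import deque

links = {  # hypothetical link graph: site -> sites it links to
    "wikipedia.org": ["bbc.co.uk", "nicheblog.com"],
    "bbc.co.uk": ["localnews.com"],
    "localnews.com": ["yoursite.com"],
    "nicheblog.com": [],
    "yoursite.com": [],
}
seeds = {"wikipedia.org", "bbc.co.uk"}

def hops_from_seeds(graph, seeds):
    """Breadth-first search outward from the seed set."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        site = queue.popleft()
        for target in graph.get(site, []):
            if target not in dist:
                dist[target] = dist[site] + 1
                queue.append(target)
    return dist

print(hops_from_seeds(links, seeds))
# e.g. {'wikipedia.org': 0, 'bbc.co.uk': 0, 'nicheblog.com': 1,
#       'localnews.com': 1, 'yoursite.com': 2}

The fewer hops between you and the seed set, the better the neighbourhood you appear to live in.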

So the big question is: how do you become a trusted source? Very simply: links and mentions that sit close to Google's trusted seed list. Unlinked mentions in databases that help Google qualify you as a trusted entity will also help. For a full reference list of these databases, put your favourite brand into Wikidata and scroll down to the "Identifiers" section.
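If you would rather not scroll by hand, the same identifier list can be pulled programmatically. Here is a sketch against Wikidata's public SPARQL endpoint (Q95, the entity for Google, is just an example ID):

# Sketch: list the external identifiers Wikidata holds for an entity,
# via the public SPARQL endpoint. Q95 (Google) is an example entity ID.
import requests

QUERY = """
SELECT ?propLabel ?value WHERE {
  wd:Q95 ?claim ?value .
  ?prop wikibase:directClaim ?claim ;
        wikibase:propertyType wikibase:ExternalId .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "identifier-check/0.1 (example script)"},
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["propLabel"]["value"], "=", row["value"]["value"])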

Another, more obvious way is to earn links from those sources. How do you do that? Well, it would be remiss of me not to shill our own Digital PR service.

Post-August Core Update, Back to Square One

It appears that domains that showed signs of recovery as the August update rolled out are now back to where they started, with most of their gains being reversed.

Glenn Gabe put together a great in-depth analysis video on the topic, which I recommend people watch to understand a bit more about what happened.

Why did things look like they recovered and then go back down again?

Whilst we don't know the exact workings of the algorithm, we do know some of the technology at play when a core update happens. The biggest factor that can make rankings go up and then come back down again is machine learning in the SERPs.

Essentially, when a core update happens, the machine learning is "reset" on various queries. This creates huge volatility in the SERPs while Google gathers user click data to decide which URLs should rank for which queries under the new rules of the core update.
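To picture what that could look like, here is a deliberately crude toy (our speculation only, nothing to do with Google's real systems): two URLs start on an equal footing after a reset, and repeated click feedback gradually locks in an ordering.

# Deliberately crude toy of click-feedback re-ranking after a "reset".
# Pure speculation for illustration; not Google's actual system.
import random

random.seed(1)
score = {"old-author-page": 0.5, "seo-arnout-page": 0.5}  # reset: equal priors
true_appeal = {"old-author-page": 0.2, "seo-arnout-page": 0.8}
LEARNING_RATE = 0.05

for day in range(1, 15):
    for url in score:
        clicked = random.random() < true_appeal[url]  # simulated user click
        score[url] += LEARNING_RATE * (clicked - score[url])  # running estimate
    ranking = sorted(score, key=score.get, reverse=True)
    print(f"day {day}: ranking = {ranking}")

Early on, the ordering can flip on noise; after enough clicks, the genuinely preferred page settles into the top spot.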

My good friend and SEO expert, Arnout Hellemans, showed me this with his own name. He shares it with Arnout Hellemans Hooft, a Dutch author from the 17th century. When the machine learning "resets", the old author ranks for the first day or two; then the machine learning kicks in and Arnout the SEO takes over the SERP.

Really cool but totally impossible to diagnose or forecast.

So our advice is always the same when it comes to core updates. Regardless of what is happening to your traffic: DO NOT DO ANYTHING. Any changes you make will be confounded with Google's own changes, so there will be no way to test whether your optimisations helped or hindered.

Anecdotally, we see much more severe fluctuations during core updates for domains with low brand search volume and a poor brand footprint. So it appears that building obvious brand signals and having a diversified marketing mix is the best defence against traffic loss.
