Becoming a Slave to Google: How It Happens
Part 4: After an update, you always have to second-guess what Google did. You become a reverse engineer who never sees under the hood. This article is reprinted from design theorist William Dembski's Substack, with his permission.
As mentioned, I got into SEO content-based educational websites around 2010. It was a wild west back then—and easy, at least by present standards, to make money if you knew your way around the educational world, which I did. There was much low-hanging fruit, as they say. Things have become much more competitive these days, both because of the sheer number of entrants into the field and because of the way Google gobbles up and regurgitates the information it absorbs from SEO content-based sites, giving people less and less reason to visit sites that produce original content.

With regard to my SEO content businesses, I have strong positive and negative feelings about Google. On the positive side, there's no way I could have made the money I did ten years ago but for search engines like Google helping people to find my websites. That said, Google's control over my business and other businesses in my space was troubling even back then and became overwhelming over time. In the name of making the web better, Google would periodically update its algorithm. Google's algorithm is not open source, so you never know what an algorithm update will do to your website. After an update, you're always in the position of someone who has to second-guess what Google did. You must become a reverse engineer, but without ever seeing under the hood.
The rankings roller coaster
With one of my cherished websites, for example, Google updates in 2014 and 2017 reduced my business by 50 percent or more. Other websites I knew of in my space were simply wiped out by some updates. When you get hit by a Google update that drastically undercuts your business, you are left reeling. All your attention then goes to figuring out what got you penalized, redressing it, and thereby recovering your business. Perhaps you can get it back. Perhaps you can't. In 2014, I had a lull of four months where my business was reduced by 50 to 60 percent. And then one weekend in May the business rebounded, and by the end of the summer my business was 400 percent over where it had been at its prior best.
But the hit I took in 2017 was much harder. I was optimistic that I could get the website back to its former profitability. But the space in which my website operated was getting much more crowded and competitive. And the hassle of dealing with Google's updates, never quite being sure whether you had run afoul of Google's ever-changing standards, was more than I wanted to deal with. Like an athlete who gets injured, I had to assess the seriousness of the injury that Google inflicted on the site. Was it brief? Was it season ending? Was it career ending? From what I saw with colleagues in my space, any time you got hit by Google, it could be any one of these.
So I sold the site. The payout included an earnout based on the site reaching and then exceeding its former glory. With my team being part of the acquisition deal, the former profitability plus some extra was achieved, and so I received my earnout. While I think I made the right decision in selling the site at the time, I didn't like feeling that I was forced to sell because of Google's arbitrariness in deciding which websites to reward and which to punish. Google obviously wouldn't put it that way, portraying itself instead as calling the web to ever higher standards of excellence. But among small business owners like me, the perception that Google's updates were in large measure self-serving and arbitrary was widespread. Moreover, without transparency from Google about how it was updating its algorithm, that perception was hard to refute.
The gray hats and the black hats
Ranking with search engines is a zero-sum game. If your site gets knocked lower on a SERP, another site takes over your spot. As I watched webpages getting reshuffled on Google's SERPs for given keywords, it was often hard to discover a compelling rationale for the reshuffling. Sometimes, though, the rationale could be found. I recall one site with a page that suddenly came to rank highly for a major keyword in the educational space. The page listed colleges that no other site had ever ranked highly in that keyword category. What gave the article traction with Google is that schools that had never ranked highly in that category felt gratified suddenly to be ranked highly. Consequently, they linked to the page, thereby signaling to Google that it should take note of the article and rank it highly for that keyword. Because .edu sites get high domain authority from Google, links from .edu sites are enormously helpful for bolstering a site's SEO.

As a matter of public relations, Google would say that its updates are meant to reflect improvements in the quality of websites. And granted, many websites try to game Google with gray hat and black hat methods to raise their traffic, and these methods do require some response from Google. Gray hat methods exploit ambiguities or loopholes in Google's guidelines without overtly violating them. Examples include keyword stuffing (filling articles with multiple mentions of a keyword), purchasing low-quality backlinks, or cloaking content to appear more relevant to search queries. These tactics are risky because once Google catches on to them, it will update its guidelines to punish them, and the punishment can fall without warning.
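One reason keyword stuffing is risky is that it leaves an obvious statistical fingerprint. The following sketch (purely illustrative; it is not anything Google publishes, and real ranking systems are far more sophisticated) shows how a crude density check can separate stuffed text from natural prose:

```python
import re
from collections import Counter

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

# A natural sentence mentions a keyword once or twice;
# a stuffed page repeats it far beyond what ordinary prose produces.
natural = "Our college guide compares tuition, outcomes, and campus life."
stuffed = "Best college! College rankings for every college. College college college."

assert keyword_density(natural, "college") < 0.2
assert keyword_density(stuffed, "college") > 0.4
```

The point is not the specific threshold but the asymmetry: any signal this easy to fake is also easy to detect once Google decides to look for it, which is why gray hat gains tend to evaporate after an update.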
Black hat methods, by contrast, flagrantly violate Google's policies and rely on outright deception to game its algorithm. Examples include hiding spammy keywords in invisible text (one site I heard about put entire links in the periods at the ends of sentences), using link farms, or deploying bots to create fake engagement. Black hat methods also include actively sabotaging competitor sites, as by giving them toxic backlinks (such as having porn sites link to them). While black hat methods often achieve faster results than the more subtly unethical gray hat practices, they risk severe penalties from Google, such as being delisted from search results entirely.
Bigger is seen as better
But having now endured many of Google's updates (we currently get several a year), I sense less a desire to make the web better and more a need to simplify how Google adjudicates merit, with a strong tendency to value big sites and those with the resources to continually update their content. The rich get richer, and new entrants face ever more stringent barriers to entry. In the education space, for instance, the site I sold some years back has since lost most of its business. Monster sites like USNews, Forbes, and Niche, as well as discussion forums like Reddit and Quora, now suck up most of the oxygen in the educational space (after Google has taken first dibs at the oxygen through its ads).
As it is, Google's artificial intelligence is simply not good enough to determine reliably which pages best answer a keyword query. Google holds a number of patents related to its proprietary search algorithm, which determines the ranking of web pages in search results, but in reality the algorithm is a trade secret, especially to the extent that it incorporates updates whose workings are entirely proprietary and opaque, like Panda, Penguin, and the more recent HCU (Helpful Content Update, for which I've yet to find a colleague who regards it as helpful).
Google's publicly shared guidelines and policies, such as its Webmaster Guidelines, are supposed to describe best practices for webmasters to optimize their sites in line with Google's stated vision for a rich and vibrant web. But dutiful adherence to the guidelines does not guarantee optimal rankings. What Google says it wants in websites and what it actually rewards and punishes in them are often two different things. This leads to a slavish mentality that is always trying to second-guess whether Google will favor some piece of content or way of expressing it.
The case of USNews and ranking credibility
Also, Google assesses websites not simply via its algorithm. Google outsources over 100,000 jobs, and these include quality raters who look over sites to determine how they are doing with respect to such criteria as E-A-T (Expertise, Authoritativeness, and Trustworthiness). Enduring presence, in the form of a website's high authority over a long history, seems always to be a prime ranking factor for Google. The USNews rankings are not the best by any means, and yet Google rewards them with consistently high search results, in part because they've been around the longest and schools cannot afford to go down in their rankings (a college or university that goes down in an annual USNews ranking regards this as a tragedy and going up as a cause for celebration). Many academics in fact regard the USNews rankings as ridiculous, and for good reason, as the following video makes clear (don't let its humor distract you from the truth):
Next: Google’s power over online business: Monopolistic and extravagant
Here are the previous installments:

The evilization of Google, and what to do about it. Part 1: Understanding Google's dominance over the internet. Nothing is totally evil. Still, there's enough evil in Google that it is, for now, more on the side of Darth Vader than Obi-Wan Kenobi.

The utter dependence of online businesses on Google. Part 2: An SEO business needs to please Google or else it is dead in the water. Google's re-presentation of information created by others makes it less likely that users will visit them. Thus Google's business expands at their expense.

A potential chink in Google's armor: Loss of legal immunity. Part 3: Currently, Google is legally protected from the consequences of frequent copyright violation. One outcome of the resulting ad clutter is that, unless you are top of the search, you're likely wasting time trying to make money off organic search.