In our increasingly digital society, the algorithm seems set to replace wisdom in human reasoning. While we are seeing some pushback against the movement to “algorithmicize” everything, few lay out explicitly the limitations as well as the benefits of the algorithms increasingly used to make decisions.
Recently, the Wall Street Journal ran an article on the current contentions within Netflix between “the Algorithm” (Netflix’s data-driven decision-making model) and “Hollywood” (the face-to-face deal-making that dominates the movie and TV businesses). For instance, the Algorithm was in favor of canceling the GLOW series, due to the lackluster performance of the comedy featuring women’s wrestling, while the Hollywood side believed that Netflix needed the show to bolster its standing in the cinematic community.
As you can see from the description of the dispute, the Algorithm and Hollywood approached the question from completely different points of view. The Algorithm offered a narrow view (ratings) while Hollywood offered a “big picture” perspective.
That is typical of algorithms: they do what they are asked to do. The set of parameters an algorithm can address is, almost by definition, predefined. The algorithm takes in a set of variables, such as how often a viewer watches a show, how likely the viewer is to return, and how likely that viewer is to seek out more about the show. It then combines these measures into a single score that represents the “value” of the show to Netflix.
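To make the idea concrete, here is a minimal sketch of how such a scoring model might look. The metric names and weights are purely illustrative assumptions, not Netflix's actual model:

```python
# Hypothetical sketch of an engagement-based scoring model.
# The signals and weights are invented for illustration; a real
# recommendation system would use far more inputs.

def show_value(watch_frequency, return_likelihood, explore_likelihood):
    """Combine per-viewer engagement signals (each a rate in [0, 1])
    into one score. The weights encode how much each behavior is
    assumed to matter to the business."""
    weights = {"watch": 0.5, "return": 0.3, "explore": 0.2}
    return (weights["watch"] * watch_frequency
            + weights["return"] * return_likelihood
            + weights["explore"] * explore_likelihood)

# A show that is watched often but rarely revisited can score lower
# than one that keeps viewers coming back.
casual_hit = show_value(0.9, 0.2, 0.1)   # engaged but not sticky
sticky_show = show_value(0.6, 0.8, 0.5)  # sticky, exploratory audience
```

Whatever the exact weights, the point stands: the model can only weigh the variables someone thought to feed it.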
Hollywood, on the other hand, uses a different set of criteria: If we agree to this or that market demand, how does it affect our brand in the long term? How does it affect our ability to attract the best actors? How does it affect our ability to generate free publicity in the media? And how does all of that affect our standing relative to the other choices available to users?
This narrow-vs-wide scope is typical of discussions about quantification vs. wisdom in many forms and fields of inquiry.
There are three situations where quantification often fails. The first is fairly obvious—not every aspect of a decision can be quantified or even specified. In many real-life situations, there are simply too many variables that could impact a decision to specify them all. Additionally, even if you could specify them all, including them (and how they affect results) in the analysis is simply too computationally costly. In the Artificial Intelligence community, this is known as the “Frame Problem.”
Another problem arises with recursion. Algorithms are excellent for a set of known, finite steps connecting the question and the answer. However, the results are trickier when the steps are based on feedback loops (recursion). For instance, it is not hard for an algorithm to determine the effect of seeing “Image A” or “Image B” on the user’s likelihood of clicking on a story, a TV show, or a movie. However, a deeper question is, how does repeated viewing of certain types of images change the user’s preferences over time?
Assume that we have determined that images with more red content attract more viewers. Very well, but if we apply that reasoning to all the images we present, then all of them will become redder. How does that global change affect the user’s view of the color red? Might it shift viewers’ preferences so that, in the future, they prefer blue more often? Or might they abandon this crazy platform where every icon is red? Computers are excellent at first-order reasoning about what makes people more likely to click on something right now. They are terrible at second-order reasoning about how the decisions they make today will affect the workings of the process in the future.
The third problem is connecting quantifications with reality, a subject we will tackle in a later article.
For problems that require first-order thinking (problems for which the simple, apparent state of the facts is an accurate picture), algorithms are clearly faster and more accurate at making decisions. However, decades of study in artificial intelligence show that wisdom still maintains the upper hand in higher-order thinking, where the immediate, apparent state of the facts may not be correct, complete, or even quantifiable. Even so, algorithms can sometimes provide clearer inputs to these latter kinds of decisions, although they cannot provide the entire basis for the decisions.
Quantitative analysts (“quants”) are often mesmerized by the simplicity, beauty, and power of the systems they create. It is easy to be overwhelmed by stacks of data that seem to proclaim that the experts know the right thing to do. Sometimes the experts are right. Wisdom will tell us when to listen and when their field of view is too constrained by their tools.
Note: “The Mathematical Difference between Wisdom and Science” offers some helpful distinctions.
Jonathan Bartlett is the Research and Education Director of the Blyth Institute.