Mind Matters Natural and Artificial Intelligence News and Analysis

# Can We Make Money by Losing Money?

Yes, sometimes. Human ingenuity isn’t simple arithmetic

The old joke runs, “We lose money on every sale, but we make it up in volume.” Clearly, the point of the joke is that if we lose money on every sale, a higher volume will lose more money. However, the joke misses an important aspect of reality: Sometimes, you really can make it up in volume.

We tend to have a static picture of reality. However, human ingenuity changes reality, and it does so constantly. We can even change ourselves. Most important for understanding this problem, we can learn.

Think about the first time you did a do-it-yourself job in your own home. I confess, for me, it took hours. I had to make about twelve trips to the hardware store just to get the right pieces. My efforts barely got the job done. But, afterward, on the next similar project, I didn’t need as many trips to the store. I bought fewer of the wrong pieces. I didn’t have to refer to the instruction manual as many times. The job was both faster and cheaper.

This fact of human existence—that, as we do things more often, we get better at them—has long been recognized in business as the “learning curve.” A “learning curve” represents an individual’s or group’s road to competency. As with my do-it-yourself experience, the first time you do anything will be a complete mess but, eventually, after enough practice, you will become competent.

However, the learning curve model has deeper implications. It turns out that humans don’t just go from “incompetent” to “competent” at a given task. We develop new ways of doing things that didn’t exist before. These new developments are part of the learning curve concept.

A rule of thumb for the learning curve is that you can achieve an average 20% decrease in unit costs every time you double your cumulative output. But this rule tells us nothing about where that 20% will come from. It tends to originate from a variety of sources, for example:

• eliminating production inefficiencies
• finding equivalent materials at lower cost
• drafting better supplier agreements based upon volume
• sometimes, even rethinking the whole product.
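The 20%-per-doubling rule of thumb can be written as a simple power law. Here is a minimal sketch; the $100 starting cost is an illustrative figure, not drawn from any real product:

```python
# Sketch of the classic learning-curve rule of thumb: unit cost falls
# about 20% each time cumulative output doubles.
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.80):
    """Cost of the nth unit under an 80% learning curve.

    Each doubling of cumulative output multiplies unit cost by
    `learning_rate`, so cost(n) = cost(1) * n ** log2(learning_rate).
    """
    exponent = math.log2(learning_rate)  # ≈ -0.322 for an 80% curve
    return first_unit_cost * cumulative_units ** exponent

# Starting at $100 per unit, each doubling cuts cost by 20%:
for n in [1, 2, 4, 8, 16]:
    print(n, round(unit_cost(100.0, n), 2))
# 1 100.0, 2 80.0, 4 64.0, 8 51.2, 16 40.96
```

Note that the rule depends on *cumulative* output, not output per year: the tenth thousand units are cheaper than the first thousand regardless of how long they take to produce.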

Take the transistor. When transistors were first produced in the 1950s, manufacturing yields were extremely low—only about 20% of manufactured transistors even worked.

In the 1960s, transistors cost, depending on whom you asked, from $8 to $30 each, adjusted for inflation. However, sellers realized that transistors would become more popular if they cost less to produce and sell. As a result, more people studied transistors. They came up with the idea of the integrated circuit, where transistors are made on wafers of silicon. This reconfiguration of the transistor led to drastic reductions in price-per-transistor.

As technology improves, the number of transistors we can fit in a given area of silicon continues to grow, driving down the cost-per-transistor. Today, a single transistor in your computer generally costs about $0.00000001.
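As a back-of-envelope check, we can ask how many output doublings the 20%-per-doubling rule would need to carry a transistor from the article's rough $8 starting price down to roughly $0.00000001. This treats the article's loose figures as exact inputs, and real transistor costs fell through many mechanisms beyond the simple rule, so this is only a sketch:

```python
# How many doublings of cumulative output does an 80% learning curve
# need to reduce cost from ~$8 to ~$0.00000001 per transistor?
import math

start_cost = 8.0          # rough 1960s cost per transistor (inflation-adjusted)
end_cost = 0.00000001     # rough cost per transistor today
learning_rate = 0.80      # 20% cost reduction per doubling of output

doublings = math.log(end_cost / start_cost) / math.log(learning_rate)
print(round(doublings, 1))  # → 91.9
```

Roughly ninety doublings of cumulative output is enormous, which hints at why the transistor, with its billions upon billions of units shipped, is the canonical learning-curve success story.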

In technology, the learning curve is commonly called “Moore’s Law” but the basic principle applies nearly everywhere. That’s why some high-growth companies choose to sell their products for less than the cost to manufacture them. The premise is that the increased unit sales from lower prices will advance the learning curve enough that the products will eventually become profitable. If they own the learning curve, others won’t be able to compete. They are literally planning on losing money on every sale but making it up in volume—and sometimes it works.

While the learning curve is not an exact science, the interesting part, in my view, is that such a non-physical mechanism can be measured at all. The learning curve doesn’t tell you where your efficiencies will come from. It doesn’t tell you how people are going to re-envision your product to make the jump from millions to billions of units. It doesn’t tell you how your supplier agreements will improve. Instead, it is a window into the human mind and how it combines observations into useful insights. Not every insight requires a mechanism, and learning to be comfortable with trusting in things we can’t directly control will be key to the future of technology.

Also by Jonathan Bartlett:

• Self-driving cars need virtual rails
• Bitcoin: Is lack of trust the biggest security threat?
• Walter Bradley Center Fellow Discovers Longstanding Flaw in Elementary Calculus: The flaw doesn’t lead directly to wrong answers but it does create confusion.