How Software Makers Will Push Back Against Reforms
Software makers will grumble but insurers may force their hand. That, however, is NOT the Big Battle… Veteran software developer David A. Kruger offered some thoughts on computer security recently at Expensivity, and we appreciate the opportunity to republish them here as a series. On Friday, we looked at the claim that human data collectors should own your data because it is too complex for you to manage. In this final installment, we look at how tech companies will try to avoid actually having to change anything.
Preview of Coming Attractions
If policymakers start to move towards implementing the policies suggested above, there will be pushback from software makers that are not HDCs. They will be unhappy about additional software development costs, and they will play the “It’s the cyberattackers, not us!” card, saying it’s unfair to hold them responsible for “unforeseeable” cybersecurity failures. Part One of this article was written to refute that argument.
Non-HDC software makers who license to organizations will have to negotiate with their customers (the organizations potentially harmed by their products) over how to defray software development costs, and most likely both parties will involve their insurers—and that’s a very good thing. Insurers make their money by turning unquantifiable, unpredictable risk into quantifiable, predictable risk. When it comes to hazardous manufacturing processes and products, and to compliance with laws and regulations, they do so by requiring insureds to implement technologies and techniques that are demonstrably effective. Software makers are likely to change their software development priorities quickly if they must do so to retain or obtain insurance coverage. When it comes to cybersecurity risk, working out rational risk sharing and engineering best practices among software makers, their customers, and their respective insurers is long, long overdue.
Wall Street, J. Edgar Hoover, Langley, and Fort Meade
The pushback from HDCs, especially big tech HDCs, will be swift, brutal, loud, and extremely well funded. It will include hordes of lawyers and lobbyists, but also some players you may not expect—Wall Street, and certain elements of the intelligence and law enforcement communities.
Why would Wall Street get involved? The market capitalization of HDCs that depend primarily on the unfettered collection of raw human data to generate advertising revenue (Google, Facebook, and others with similar business models) isn’t predicated on their technology, intellectual property, or physical plant; it’s predicated on the value of the human data they hold and on their unimpeded ability to continue collecting it. The value of their human data holdings will plummet unless it is continuously “topped up” with raw human information. Why?
Users are dynamic. They are exposed to new information that impacts their current beliefs and behaviors, and it is precisely that malleable nature of human beings that algorithmic nudging exploits. If nudging algorithms are starved of new information, they cease to function in real time. The long experiment ends, and the guinea pig reverts to being an unmanipulated human being. The efficacy of manipulative advertising declines and it takes ad rates with it. Remember that prior to discovering that users could be algorithmically nudged and addicted, ad targeting was based on a relatively simple analysis of what users were currently searching for. Without continuous topping up, HDCs will have to revert to that model, and the historical data they hold would quickly lose value. Furthermore, if policy changes make HDCs liable for breaches of all that obsolete personal data they hold, the data would become a liability, not an asset. Why continue holding it?
The extreme sensitivity of HDCs to the loss of continuous real-time raw human data collection was recently demonstrated by Facebook’s valuation losses after Apple gave users the choice not to be tracked by Facebook, among others. Facebook lost $250 billion of market cap. Wall Street is not favorably disposed toward those kinds of shocks, so some are likely to push back using familiar arguments along the lines of “They must be protected! They are too big to fail!”
Going Dark
When it comes to cybersecurity, law enforcement and the intelligence community are divided into two camps: those responsible for keeping data safe, and those who want unfettered access to data in order to protect individuals and the country. The latter group will lobby hard against the root cause fix described in Part One because it requires ubiquitous strong encryption to protect data in storage and in transit. This conflict has been going on for decades and is known as the “crypto wars.”
Based on past experience, the intelligence and law enforcement officials who disfavor ubiquitous strong cryptography will inevitably accuse pro-encryption folks, including policymakers pushing for the policies listed above, of aiding child pornographers, drug cartels, human traffickers, and terrorists. Pro-encryption policymakers should expect to be smeared. The anti-encryption narrative will focus on a special class of victims, not all victims.
The rhetorical trick anti-encryptors will deploy is to ascribe good and evil to a thing: encryption. They’ll say “Encryption is bad because bad people use it, and good people have nothing to hide, so they shouldn’t be able to use it.” To see how the trick works, let’s extend that logic to things we are more familiar with (1-2), then to encryption (3), and then show it “naked” (4):
1. Child pornographers, drug cartels, human traffickers, and terrorists use planes, trains, and automobiles for evil purposes, so good people shouldn’t be able to fly, ride, or drive.
2. Seat belts and airbags save many lives in traffic accidents, but in a few cases they cause death, so cars shouldn’t have seat belts and airbags.
3. Bad people use encryption to do bad things, so good people shouldn’t be able to use encryption to do good things.
4. We can keep some people safe sometimes by keeping everyone unsafe all the time.
When anti-encryptors are called on the illogic of their rhetoric, they switch to “Encryption can be good if cryptographers can give us ‘exceptional access,’” a magical method to void encryption on whatever they want, whenever they want. Even if that were possible (it’s not, which is why it’s a magical method), you have to ask: is it a smart strategy to have a world-scale encryption-voiding mechanism that could be stolen, or figured out and replicated, by your enemies? Probably not.
Finally, there are all sorts of encryption products and services sold all over the world, available to everyone, good or bad, and encryption itself is applied mathematics that anyone with the right skills and desire can learn and use. Cyberattackers are already good at encryption—see ransomware. It’s impossible to prohibit bad guys from using encryption. So, how does prohibiting good guys from using it help?
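To make that concrete, here is a minimal sketch (an editorial illustration, not part of Kruger’s original argument) of just how low the barrier is, assuming the freely downloadable open-source Python cryptography package. Strong authenticated encryption takes a handful of lines:

```python
# Minimal sketch: strong encryption with a widely available open-source library.
# Assumes the 'cryptography' package is installed (pip install cryptography).
# Fernet bundles a vetted cipher with an integrity check (authenticated encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # fresh random key, base64-encoded
f = Fernet(key)

message = b"any data, at rest or in transit"
token = f.encrypt(message)    # ciphertext plus tamper-detection tag
plaintext = f.decrypt(token)  # raises InvalidToken if the ciphertext was altered

assert plaintext == message
```

Note the design choice: Fernet is deliberately hard to misuse, bundling key generation, encryption, and integrity checking, so even a non-expert gets authenticated encryption by default. Anyone, good or bad, can do this today.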
End of the Road
There is a saying—“Once you know something is possible, the rest is just engineering.” That applies to the problem of cybersecurity. A certain resignation has set in, a belief that we just have to learn to live with ongoing and escalating cybersecurity failure and the loss of digital privacy. That’s not true. We know the true causes of cybersecurity’s technological and policy failures, and we know it is possible to start fixing them now. What remains to be seen is whether we, as a society, have the will to fix them, and when we’ll get started.
Here are all thirteen segments in the series:
The true cause of cybersecurity failure and how to fix it. Hint: The cause and fix are not what you think. David A. Kruger, a member of the Forbes Technology Council, says it’s getting worse: We’re in a hole, so stop digging! Get back to root cause analysis.
What’s wrong with cybersecurity technology? Know your enemy: The target isn’t networks, computers, or users; they are pathways to the target—gaining control of data. The challenge: If a cyberdefender scores 1,000,000 and a cyberattacker scores 1, the cyberattacker wins, David Kruger points out.
Ingredients that cybersecurity needs to actually work. Software makers continue to produce open data as if we were still living in the ’50s and the Internet had never been invented. Forbes Council’s David Kruger says the goal should be safety (preventing harm) rather than, as so often now, security (reacting to hacks with new defenses).
Cybersecurity: Put a lid on the risks. We already own the lid. Security specialist David Kruger says data must be contained when it is in storage and transit and controlled when it is in use. Cyberattackers are not the problem; sloppy methods are. We must solve the problem we created, one piece of data or software at a time.
The sweet science of agile software development. Effective security, as opposed to partial security, increases costs in the short run but decreases them in the long run. Software veteran: Getting makers to change their priorities to safer products rather than the next cool new feature will by no means be easy.
Computer safety expert: Start helping ruin cybercriminals’ lives. Okay, their businesses. Unfortunately, part of the problem is the design of programs, written with the best of intentions… First, we must confront the fact that software makers are not often held responsible for the built-in flaws of their systems.
The cybercriminal isn’t necessarily who you think… Chances are, the “human data collector” is just someone who works for a company that makes money collecting data about you. Did you know that his bosses have paid gazillions in fines for what he and his fellows do? Let’s learn more about what they are up to.
Sometimes, money really is the explanation. Today’s internet is a concentration of power, in terms of information, never before seen in history. The HDCs (human data collectors) treat us as guinea pigs in a thoroughly unethical experiment designed to learn how to manipulate the user most effectively.
How search engine results can be distorted. Search providers such as Google are able to increase their ad revenues by distorting the search results delivered to users. Human data collectors (HDCs) have been able to evade responsibility for the preventable harms they cause by blame shifting and transferring risk to users.
How online human data collectors get free from responsibility. Cybersecurity expert David A. Kruger talks about the Brave Old World in which you have much less power than Big Tech does. For Big Tech, government fines and other censures are merely a cost of doing business, which makes reform difficult at best.
Cybersecurity: Why a poke in the eye does not work. The current system punishes small businesses for data breaches they could not have prevented. Computer security expert David Kruger says the current system makes as much sense as fining the hit-and-run victim for not jumping out of the way.
Is your data about yourself too complex for you to manage? That’s the argument human data collectors (HDCs) make for why they should be allowed to collect and own your data. Policymakers should declare that human data is the property of the individual, not of the data collector, computer security expert David Kruger argues.
and
How software makers will push back against reforms. Software makers will grumble but insurers may force their hand. That, however, is NOT the Big Battle… The Big Battle is that Wall Street will oppose reforms that restore control to you, because the market cap of Big Tech depends on human data collection.