Digital Maneuver 20200218: Trusted systems, cyber security, and AI in warfare

Here's the first Digital Maneuver newsletter.

If you find this content useful, feel free to forward it to others you think will benefit.

If someone forwarded this to you and you want to receive future issues as they're sent, you can subscribe now.

If you have feedback you'd like to share, feel free to email me at adam@digitalmaneuver.com

In addition to writing what might be his most famous work (The Mythical Man Month), Fred Brooks chaired the Defense Science Board Task Force on Military Software, which issued its report in 1987 (PDF).

The SWAP study released in 2019 by the Defense Innovation Board cites it as describing the same problems we are trying to solve today, 33 years later. The DSB report was as true then as it is now when it stated:

"The big problems are not technical. In spite of the substantial technical development needed in requirements-setting, metrics and measures, tools, etc., the Task Force is convinced that today's major problems with military software development are not technical problems, but management problems."

Reflections on Trusting Trust

As the subtitle says:

"To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software."

As long as software developers can write code, they can introduce security vulnerabilities, intentionally or not. No amount of automated security scanning can prevent this. It has been known and well publicized in the software community since at least August 1984, when Ken Thompson (co-creator of UNIX and C) delivered his Turing Award lecture to the ACM (PDF).

"You can't trust code that you did not totally create yourself... No amount of source-level verification or scrutiny will protect you from using untrusted code."
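Thompson's point can be illustrated with a toy sketch (this is not his actual compiler hack, and all names here are invented for the example): a "compiler" that recognizes the login program and silently substitutes a backdoored version. Reviewing the login source reveals nothing, because the trojan lives in the already-built tool.

```python
# Toy illustration of the "Trusting Trust" problem: the source is clean,
# but the build tool injects a backdoor. Names (LOGIN_SOURCE, trojaned_compile,
# etc.) are invented for this sketch.

LOGIN_SOURCE = """
def login(user, password):
    return password == "correct-password"
"""

BACKDOOR = """
def login(user, password):
    # Invisible in LOGIN_SOURCE: inserted by the compromised build tool.
    return password == "correct-password" or password == "backdoor"
"""

def honest_compile(source):
    """Compile the source exactly as written."""
    namespace = {}
    exec(source, namespace)
    return namespace["login"]

def trojaned_compile(source):
    """Recognize the login program and silently compile a backdoor instead."""
    if "def login" in source:
        source = BACKDOOR
    namespace = {}
    exec(source, namespace)
    return namespace["login"]

# Source-level review of LOGIN_SOURCE finds nothing wrong,
# yet the two built artifacts behave differently:
honest_login = trojaned_login = None
honest_login = honest_compile(LOGIN_SOURCE)
trojaned_login = trojaned_compile(LOGIN_SOURCE)
print(honest_login("alice", "backdoor"))    # False
print(trojaned_login("alice", "backdoor"))  # True
```

In Thompson's real attack, the compromised compiler also recognized its own source and re-inserted the trojan when recompiling itself, so even rebuilding the compiler from clean source did not remove it.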

Facebook made this clear in an article on their Zoncolan static analysis tool.

The Zoncolan paper found that static analysis caught the most bugs, followed by code review, with white-hat or red-team exercises coming in third. Of course, Facebook uses all of these in combination to achieve the best results.

This is also not a dismissal of using open source libraries or tools, which have been evaluated by thousands, and in some cases millions, of other developers. Rather, it is a rebuttal to the idea that we can impose sufficiently constraining automated scans to detect all software defects.

While automated scans can pick up known vulnerabilities in software, they cannot prevent the introduction of new vulnerabilities by developers themselves. It is critical to help senior leaders understand this in order to avoid

  • overconfidence in automated security tooling, and
  • excessive time wasted building automated tooling that cannot be effective anyway.

In addition to good software development practices, the best way to improve software security is a combination of

  1. static analysis tools
  2. automated scanning for known vulnerabilities
  3. code review by competent software developers
  4. continuous red teaming of deployed software

Widely used open-source software usually meets all four of those criteria by default, and we must ensure that the software we create does as well.
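Item 1 above can be sketched in miniature: walk a program's syntax tree and flag calls to functions commonly associated with injection bugs. Real tools like Zoncolan (or the open-source Bandit) are far more sophisticated; the rule list here is an illustrative assumption, not a complete policy.

```python
# Minimal static-analysis sketch using Python's ast module: flag calls to
# a small, assumed list of dangerous functions. Illustrative only.
import ast

DANGEROUS_CALLS = {"eval", "exec", "pickle.loads", "os.system"}

def call_name(node):
    """Return a dotted name like 'os.system' for a Call node, if simple."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return None

def scan(source):
    """Return (line number, name) for every call to a flagged function."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = call_name(node)
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = """import os
user_input = input()
os.system("ping " + user_input)
result = eval(user_input)
"""
for lineno, name in sorted(scan(sample)):
    print(f"line {lineno}: call to {name}")
```

Note that this finds only the patterns it was told to look for, which is exactly the limitation discussed above: it says nothing about vulnerabilities that take a shape the rules don't anticipate.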

Why are we so bad at software engineering?

This is an excellent article on the costs and trade-offs of speed in software engineering. Often, the costs of things like short periods of downtime or bugs in software are small in comparison with the potential value delivered. These trade-offs need to be weighed in context for different software development domains. For example, the rigor required for a customer-facing website is different from that required for avionics software.

Mosaic Warfare: Exploiting Artificial Intelligence and Autonomous Systems to Implement Decision-Centric Operations

Article link (PDF)

"Decision-centric warfare assumes communications will be contested and often denied during military confrontations. Therefore, C2 relationships would follow communications availability, rather than attempting to build a communications architecture that supports a desired C2 structure, as in Network-Centric Warfare. Arguably, DoD’s efforts to build communications networks have failed in part precisely because they sought to impose a desired C2 structure through a ubiquitous and resilient network that is possibly unachievable and unaffordable."

For feedback or to provide contributions, you can email me at adam@digitalmaneuver.com