
Defense Innovation Board proposes new metrics for assessing DOD software development

"What gets measured gets managed," after all.
(DoD photo by Army Sgt. Amber I. Smith)

“What gets measured,” the saying goes, “gets managed.” And if you want to change the way an organization goes about pursuing its goals, one way to do so is to change the metrics it tracks.

The Defense Innovation Board, since it was established in April 2016, has been focused on helping to change the way the Department of Defense thinks about technology and innovation. Specifically, of late, attention has turned to evolving the way the DOD acquires software.

At a public meeting in April in Cambridge, Mass., the advisory board, which is led by Alphabet technical adviser Eric Schmidt, introduced an initial version of what it calls the Ten Commandments of Software — 10 suggestions for how DOD should approach and think about software acquisition. These include suggestions like “adopt a DevOps culture,” “make compute abundant” and more.

Now, the DIB has some preliminary ideas for how DOD should start measuring whether its software development work is successful. During a meeting at the Defense Innovation Unit Experimental offices in Silicon Valley on Wednesday, board members Richard Murray, a professor at the California Institute of Technology, and Michael McQuade, a former senior vice president for science and tech at United Technologies, unveiled the proposed metrics.


“The current state of practice within DoD is that software complexity is often estimated based on number of source lines of code (SLOC), and rate of progress is measured in terms of programmer productivity,” the board wrote in a draft list of the proposed metrics. “While both of these quantities are easily measured, they are not necessarily predictive of cost, schedule, or performance.”
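To see why a raw SLOC count is easy to produce but says little about cost, schedule or performance, here is a minimal, purely illustrative line counter. The file extensions and comment conventions are assumptions for the sketch, not anything the DIB or DOD prescribes.

```python
from pathlib import Path

def count_sloc(root: str, extensions=(".py", ".c", ".java")) -> int:
    """Naive SLOC count: non-blank lines that aren't obvious
    single-line comments. Illustrative only."""
    total = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        for line in path.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith(("#", "//")):
                total += 1
    return total

if __name__ == "__main__":
    # Run from a project root; the number produced reveals nothing
    # about how useful, reliable or timely the software is.
    print(count_sloc("."))
```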

“As an alternative, we believe the following measures are useful for DoD to track performance for software programs and drive improvement in cost, schedule, and performance,” it says.

The metrics can be broken down into four broad categories — deployment rate metrics, response rate metrics, code quality metrics, and program management, assessment, and estimation metrics. The DIB also provides general timeframes for what a “good” score looks like for each metric.

On deployment rate, the DIB suggests that the DOD measure things like “time from program launch to deployment of the simplest useful functionality” and the time necessary from when code is committed to when it is available for use in the field. The DIB urges the DOD to shoot for timeframes of days, weeks or months instead of years, but acknowledges that the time necessary will depend on whether the software in question is commercial off-the-shelf, fully customized or somewhere in between.
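As a rough illustration of the commit-to-field measurement, the sketch below computes lead time from paired commit and deployment timestamps. The record format and values are hypothetical; the DIB's draft does not specify how the data would be collected.

```python
from datetime import datetime
from statistics import median

# Hypothetical records: (commit_time, fielded_time) pairs in ISO 8601.
releases = [
    ("2019-05-01T14:00:00+00:00", "2019-05-20T09:30:00+00:00"),
    ("2019-06-03T11:15:00+00:00", "2019-06-28T16:45:00+00:00"),
]

def lead_time_days(commit_iso: str, fielded_iso: str) -> float:
    committed = datetime.fromisoformat(commit_iso)
    fielded = datetime.fromisoformat(fielded_iso)
    return (fielded - committed).total_seconds() / 86400

lead_times = [lead_time_days(c, f) for c, f in releases]
print(f"median commit-to-field lead time: {median(lead_times):.1f} days")
```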

The response rate metrics are intended to measure how long it takes to get a program back up and running after a failure. The code quality metrics track things like the number of bugs caught and the percentage of the code available to the DOD for review. The program management, assessment, and estimation metrics, finally, cover the number of developers on a project, their skill levels, the rate of change in the mission area and more.
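For the response rate idea, a mean-time-to-restore calculation over outage records is one plausible reading of the metric. The incident log below is hypothetical and only sketches the arithmetic involved.

```python
from datetime import datetime

# Hypothetical outage log: (failure_detected, service_restored) timestamps.
incidents = [
    ("2019-04-02T03:10:00", "2019-04-02T05:40:00"),
    ("2019-04-19T22:05:00", "2019-04-20T01:35:00"),
]

def hours_to_restore(start_iso: str, end_iso: str) -> float:
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    return (end - start).total_seconds() / 3600

mttr = sum(hours_to_restore(s, e) for s, e in incidents) / len(incidents)
print(f"mean time to restore: {mttr:.1f} hours")
```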


Overall, the DIB is pushing the DOD to move faster on software and to broaden the data it collects about how its programs are performing.

At the meeting, the board debated each metric and members offered thoughts on what else might need to be measured. The board made it clear that this, like the Ten Commandments of Software, will be an ongoing conversation.

“This is early thinking,” McQuade said. “Clearly there are a lot of questions that still have to be discussed.”
