
A Guide to Selecting Software Measures and Metrics

Going where no book on software measurement and metrics has previously gone, this critique thoroughly examines a number of bad measurement practices, hazardous metrics, and huge gaps and omissions in the software literature that neglect important topics in measurement. The book covers the major gaps and omissions that need to be filled if data about software development is to be useful for comparisons or for estimating future projects.

Among the more serious gaps are leaks in reporting about software development effort that, if not corrected, can distort data and make benchmarks almost useless, and possibly even harmful. One of the most common leaks is unpaid overtime. Software is a very labor-intensive occupation, and many practitioners work very long hours, yet few companies actually record unpaid overtime. This means that software effort is underreported by around 15%, a value too large to ignore. Other sources of leaks include the work of part-time specialists who come and go as needed. There are dozens of these specialists, and their combined effort can top 45% of total software effort on large projects.

The book helps software project managers and developers uncover errors in measurements so they can develop meaningful benchmarks for estimating software development effort. It examines variations across a number of areas that affect measurement. Filled with tables and charts, this book is a starting point for making measurements that reflect current software development practices and realities, in order to arrive at meaningful benchmarks to guide successful software projects.

372 pages, Hardcover

Published April 10, 2017


About the author

Capers Jones

30 books · 11 followers

Ratings & Reviews



Community Reviews

5 stars: 0 (0%)
4 stars: 0 (0%)
3 stars: 0 (0%)
2 stars: 1 (100%)
1 star: 0 (0%)
Displaying 1 of 1 review
Paul Black
315 reviews · 2 followers
April 15, 2020
A thorough review of software PROJECT measures and metrics, with much useful information and many warnings that help the reader avoid wasteful or misleading measurement.

That said, I was disappointed in many ways. The title is misleading: the vast majority of the information is about software DEVELOPMENT, that is, the PROCESS, not software per se. Jones often repeats the same whining and criticisms, often within a few pages. For instance, the preface (pp vii & viii) gives a list of 35 specialists, which is repeated as Table 4.1 (pp 36 & 37). There are many typos, math errors (both arithmetic and conceptual), unwarranted precision, and waste. Perhaps it is just sloppy editing. But coming from someone who harps so much on others' mistakes and imprecision, these things undermine my confidence in his methodology, data collection, analysis, and conclusions.

(I read the hardcover edition, not the Kindle.)

Table 4.1 is just a list, so putting it in table format wastes space and confuses the reader.
Figure 4.3 (page 40) lists schedules against team sizes. The legend has "Series2" and "Series1", but I don't believe any "Series1" is given. In addition, it would have been nice to multiply the team size by the number of weeks (months?) to show total man-weeks (-months?). That calculation gives the following:

Team size    Total man-weeks
    10             130
     9             126
     8             128
     7             126
     6             126
     5             115

We can see that the added cost (which is why the figure is given; top paragraph, page 41) is not that large.
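
To make that concrete, here is a quick Python sketch (my code, using only the totals above) of each team's added cost relative to the 5-person baseline:

    # Total effort (man-weeks) per team size, taken from the table above.
    totals = {10: 130, 9: 126, 8: 128, 7: 126, 6: 126, 5: 115}

    baseline = totals[5]  # the smallest team is the cheapest option
    for size in sorted(totals, reverse=True):
        extra = totals[size] - baseline
        print(f"team of {size:2d}: {totals[size]} man-weeks "
              f"(+{extra}, about {extra / baseline:.0%} over the 5-person team)")

Even the 10-person team costs only about 13% more total effort than the smallest.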

I found many typos; for instance, page 6 references Table 1.4 (twice!) when it clearly means Table 1.5.

I found many errors, some simple math mistakes and others more subtle but significant. For instance, Table 1.6 (pp 12 & 13, referenced on page 11) says the Team Percent of Total is 100.00% and the User Percent of Total is 35.91%. If Team + User is the *real* total, then team cost is really 100/135.91 = 74% of the total and user cost is 35.91/135.91 = 26%.
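The renormalization in code (a minimal sketch; the variable names are mine):

    # Table 1.6 reports both percentages against a team-only base, so
    # together they come to 135.91% instead of 100%. Renormalize:
    team_pct = 100.00  # team effort as reported
    user_pct = 35.91   # user effort, measured against the same team-only base

    combined = team_pct + user_pct             # 135.91
    print(f"team: {team_pct / combined:.0%}")  # team: 74%
    print(f"user: {user_pct / combined:.0%}")  # user: 26%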
The Total occupations at the bottom of Table 4.2 (pages 39 and 40) are wrong. They should be 4, 6, and 9. (Perhaps a simplistic spreadsheet macro counted the "Total staff" row as non-zero and included it as an occupation?)
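That hypothesis is easy to reproduce. A sketch with made-up numbers (the occupations and counts below are illustrative, not from the book):

    # A naive count of non-zero rows that forgets to exclude the summary
    # row overcounts the occupations by exactly one.
    staff = {"Architect": 1, "Coders": 3, "Testers": 2, "Total staff": 6}

    naive = sum(1 for v in staff.values() if v > 0)  # 4: counts "Total staff" too
    occupations = sum(1 for k, v in staff.items()
                      if k != "Total staff" and v > 0)  # 3: the right answer
    print(naive, occupations)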
Table 5.1 (page 49) gives Average of Data Accuracy. This is a meaningless statistic! Analogously, one could compute the arithmetic mean of a set of telephone numbers, but that mean (almost certainly) has no value.
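The phone-number analogy in two lines (the numbers are made up):

    # Phone numbers are labels, not quantities; averaging them yields a
    # number but no information. "Average of Data Accuracy" across
    # unrelated rows has the same defect.
    phones = [5551234, 5559876, 5550001]
    print(sum(phones) / len(phones))  # ~5553703.67: arithmetic, not insight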

He repeatedly gives unwarranted precision. Table 2.2 (pages 20-27) gives Risk % to four significant digits (page 24)! I wouldn't be surprised if the underlying counts justify that precision, but is it enlightening to say that the risk of schedule slip for a medium-sized project is 18.97% instead of 19%?
The Total occupations at the bottom of Table 4.2 (pages 39 and 40) are counts. They should be 19 and 20 instead of 19.0 and 20.0. The decimal places are silly to the point of being wrong!
Table 7.1 (pages 60-62) gives function points per month and work hours per function point to two decimal places. I would be astonished if the data collected really justified that precision. Table 8.1 (pages 64-67) repeats the same two decimal places.
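A back-of-the-envelope check of what those digits would require (my assumption: the Risk % values are sample proportions over a set of projects):

    from math import sqrt

    # The standard error of a proportion p estimated from n projects is
    # sqrt(p * (1 - p) / n). Two decimal places of a percentage only
    # carry information if the SE is well below 0.005 percentage points.
    p = 0.1897  # "18.97%" as printed on page 24
    for n in (100, 10_000, 1_000_000):
        se = sqrt(p * (1 - p) / n)
        print(f"n = {n:>9,}: SE = {se:.4%}")
    # n =       100: SE ~ 3.92%  -> even the leading digit is shaky
    # n =    10,000: SE ~ 0.39%  -> "19%" is about right
    # n = 1,000,000: SE ~ 0.04%  -> still not enough for two decimals

No plausible benchmark corpus of software projects is anywhere near a million entries, so whole percentages are the most the data can support.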

Jones claims that "software reuse is the ultimate methodology that is needed to achieve high levels of productivity, quality, and schedule adherence" (Chap. 9, page 69). He cites mass-produced parts. I disagree. In general, if there are enough "packages" (modules, libraries, whatever) to cover one's needs, then it becomes hard to *find* an appropriate package. And if a package actually *does* cover what you need (without a lot of rework), it takes a lot of time to understand and adapt it, and to make sure you are not misusing it and thereby embedding bugs. (The software analogy to mass-produced parts is copies of software apps. It is so easy and productive that nobody thinks twice about looking for an app to translate languages, play chess, or recommend books. "We" have millions of copies of apps.)
Yes, one can engineer, say, TLS stacks for reuse, but they require (a) a very clearly specified behavior to implement, (b) lots of development time (Brooks(?) says writing for reuse is twice as hard as writing for one-time use; someone else said that something has to be (re)used at least three times before it is "reusable": you have to find out what other systems need, what interfaces make sense, and where the boundaries of functionality should be), and (c) lots of maintenance. That is NOT just "oh, here, reuse this bit of software" to save a bunch of time and get great quality.
Application generators, like compilers, or model-based software can be thought of as software reuse, but they are even harder to develop and may be very hard to use ("How do I get it to do this??"). They are great ideas, but their scope is even narrower, and their development and maintenance costs are even higher.
