Why It's Time for FP++
Finding a measure to effectively size applications has never been more complicated: Here's how using a 30-year-old concept can help.
If I can write 100 lines of code per hour but my coworker can only write 50, it seems obvious that I'm the more productive developer, right? Similarly, if I can complete a project that was scoped at 250 man-hours in only 200, am I incredibly talented, or was the scoping conservative? Although these questions are hypothetical, effective software development management necessitates addressing the underlying issues that make them so ambiguous to answer. Function points put us on the right track, but still fall short in our current age of enterprise computing.
Everyone understands that effective management requires effective measurement. Just as CEOs require financial measurements to run their organizations, IT managers require measurements of their own; without them, our ability to identify and address issues is greatly diminished. With so many IT development projects failing (either killed outright or blowing past their budgets and timelines), the need for these measures has never been more profound.
Specifically, we need to determine an effective means to quantify software applications into a consistent set of units based on the development effort required to produce the intended functionality. This concept can then be extended to improve your organization's software development capabilities by examining the following information:
Size. An application's size would serve as the base-level measurement, comparable against other applications both within a particular organization and across the industry. Size information then drives the other metrics listed here.
Productivity. The number of work units completed divided by time. Productivity information drives answers to questions such as which developers complete the most work in a constant amount of time, what the optimal project team configuration (with regard to skill set and experience) is for projects of a certain size, whether consultants or in-house staff should be used, and whether a certain tool, framework, or methodology has helped complete projects faster.
Quality. Quality can be determined by looking at ratios such as the number of defects per unit of application size or the uptime of applications by size. The capability of a quality assurance group can be examined by comparing the ratio of defects found during preproduction to the application's size against the same ratio for defects found after production. (You obviously want the former to be the much larger ratio.)
Scoping and budgeting. Having a consistent means to measure an application's size will greatly improve scoping and project estimation: whoever scopes a project can estimate its timeframe from projects of a similar size, or use calculated productivity values if data on such a project isn't available. I would envision that organizations with a culture where both the business and the IT organization are accountable for the success of a software development initiative would allocate budgeted funds based on project size (and understand that additional funds will need to be allocated in the event the project size changes).
Maintenance effectiveness and requirements. In addition to better understanding the scope of developing applications of a certain size, you can also measure the maintainability of an application by looking at metrics such as maintenance hours (or dollars) over project size. (This assumes that the nature of the maintenance work is relatively consistent across commonly skilled resources; these activities include items such as installation, log archiving, backups, auditing, manual processes, and so forth.) Furthermore, armed with data such as the annual maintenance cost per application size, based on organization or industry metrics, you can accurately estimate what will be required to maintain applications. A brief sketch following this list illustrates how these ratios might be computed.
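To make the ratios above concrete, here's a minimal sketch in Python. The helper names and sample figures are hypothetical, and the generic size units stand in for whatever sizing measure (such as the function points discussed below) an organization adopts:

```python
# Hypothetical helpers built on a single application-size measure.
# Names and sample figures are illustrative only; "size units" stands in
# for whatever sizing measure (such as function points) you adopt.

def productivity(size_units: float, hours: float) -> float:
    """Work units completed per hour."""
    return size_units / hours

def defect_density(defects: int, size_units: float) -> float:
    """Defects per unit of application size (lower is better)."""
    return defects / size_units

def qa_effectiveness(pre_prod_defects: int, post_prod_defects: int,
                     size_units: float) -> float:
    """Preproduction defect density divided by post-production defect
    density; you want this well above 1.0."""
    return (defect_density(pre_prod_defects, size_units) /
            defect_density(post_prod_defects, size_units))

def maintenance_rate(annual_maintenance_hours: float,
                     size_units: float) -> float:
    """Annual maintenance effort per unit of application size."""
    return annual_maintenance_hours / size_units

# Example: a 400-unit application built in 2,000 hours.
print(productivity(400, 2000))         # 0.2 units per hour
print(qa_effectiveness(120, 15, 400))  # 8.0
```

Of course, every one of these ratios is only as trustworthy as the size measurement in its denominator, which is exactly the problem the rest of this column addresses.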
The Challenges
However, unlike business financial measurements such as revenue or profit, effectively sizing applications is quite difficult. This is because of the myriad approaches you may take to develop and implement an application, general ambiguities in the requirements, and external influences such as partner systems, the organization's architecture, politics, and so on.
The commonplace measurements that attempt to address this need fall short for the following reasons:
Lines of code. In my opinion, this measurement tops the list of pointless items that IT managers track (with the possible exception of measuring the disk space requirements of your version-control system). The flawed assumptions in this measurement are that a piece of functionality will be programmed consistently regardless of the programmer or the program's complexities, and that each line of code is just as complex as every other. Even neophytes can see that these assumptions are ridiculous just by reading the first chapter of a "Teach Yourself How to Program" book for a few different programming languages.
Man-hours. Man-hours is a very close second on the list of metrics with little value: it's difficult to accurately estimate the number of man-hours for a software development project up front; the estimate depends on the individual programmer, project team, and tools being used; and the metric assumes that tasks can be subdivided among teams of varying sizes. (Fred Brooks explains clearly why this isn't the case; see Resources.) Furthermore, the man-hours statistic provides no capability to measure the information listed previously: What does it mean to finish a 100-man-hour project in 80 man-hours?
Relative hours. Because quantifying an application is so difficult, many IT managers simply measure the relative time allocation of their development teams: how many hours their developers spend coding vs. designing vs. sitting in meetings. Although measuring relative hours does have some merit in identifying and solving issues in an IT department (such as too many meetings), the data is rarely accurate (developers are responsible for tracking their own time and would probably be hesitant to admit that they spent eight hours in one week reorganizing the contents of their hard drives), and it only looks at time, after the fact, which makes it impossible to ascertain many of the measurements useful in project scoping and estimating.
Code branches/code-level metrics. Some organizations conduct automated analysis on their software to collect metrics: items such as the number of branches through the code or code-complexity values are calculated based on heuristics. This information is better than the man-hours or lines-of-code measurements because it separates the size of an application's functionality from space (code) and time (man-hours). However, in most cases these metrics can only be collected after the application has been completed, and they aren't consistent across different technologies, especially technologies that can assist in building applications that aren't code-based.
Enter Function Points
Developed by Allan Albrecht for IBM in the 1970s, function points were designed to solve the problem of quantifying the size of an application's functionality. Without getting into the nitty-gritty of how they work, function points are counted for an application based on a set of rules, related primarily to data flowing in and out of the application. By counting the function points of an application, you can gauge the size of its functionality and use it for the purposes listed earlier.
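As a rough sketch of the counting mechanics, the snippet below computes an unadjusted count using the standard IFPUG weights for the five function types. The real rules for classifying each item as low, average, or high complexity are considerably more detailed, and the example application's item counts here are made up:

```python
# Unadjusted function point count using the standard IFPUG weights.
# EI = external inputs, EO = external outputs, EQ = external inquiries,
# ILF = internal logical files, EIF = external interface files.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},
    "EO":  {"low": 4, "avg": 5,  "high": 7},
    "EQ":  {"low": 3, "avg": 4,  "high": 6},
    "ILF": {"low": 7, "avg": 10, "high": 15},
    "EIF": {"low": 5, "avg": 7,  "high": 10},
}

def unadjusted_fp(counts: dict) -> int:
    """counts maps a function type to {complexity: number of items},
    e.g. {"EI": {"low": 5, "avg": 3}, "ILF": {"avg": 2}}."""
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, by_cplx in counts.items()
               for cplx, n in by_cplx.items())

# A small order-entry application (item counts are invented):
app = {"EI": {"low": 6, "avg": 2}, "EO": {"avg": 3},
       "EQ": {"low": 4}, "ILF": {"avg": 2}, "EIF": {"low": 1}}
print(unadjusted_fp(app))  # 78 unadjusted function points
```

The hard part, of course, isn't the arithmetic; it's identifying and classifying the inputs, outputs, inquiries, and files consistently, which is what the counting rules exist to standardize.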
Some of the specific merits of the function points counting approach include the following:
Consistency. Function points are designed such that if two independent analysts do a count of the function points within an application, they should arrive at a very similar result.
Life-cycle friendliness. Function points can be counted at any point during the life cycle, from requirements gathering through to a completed and deployed application.
Independence. The nature of function points is that they're independent of other development factors (time, personnel skill sets, investment dollars, lines of code, architecture platforms, and so on).
Time for FP++
It's been nearly 30 years since function points were created, however, and they have some deficiencies with respect to modern-day development. First, function points worked well when enterprise applications consisted primarily of data coming in through screens, simple processing (either online or in batch), and data then going out via screens or reports. Although not specifically stated in the literature, it seems clear that function points were also designed under the assumption of structured analysis and structured design on a mainframe system.
Today's applications can be Web-based, use frameworks and third-party packages, run on many different platforms, be available continuously, integrate with known and unknown partner systems, include complex algorithms, and reside in complex architectures. Although the function points specification does provide a list of catch-all adjustments that can be made after the base count to account for complexities in the software (especially complexities unforeseen when the counting method was originally designed), these adjustments aren't commonly used, and when they are used, they're subjective and thus result in inconsistent counts.
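For reference, those adjustments work by rating 14 general system characteristics (data communications, performance, reusability, and so forth) from 0 to 5 and scaling the base count by the resulting value adjustment factor. A minimal sketch of the arithmetic, with made-up ratings, shows why the subjectivity matters: the same base count can swing by as much as 35 percent either way.

```python
def adjusted_fp(unadjusted: float, gsc_ratings: list) -> float:
    """Scale an unadjusted function point count by the value
    adjustment factor (VAF). gsc_ratings holds the 14 general system
    characteristics, each subjectively rated 0-5, so VAF ranges from
    0.65 (all zeros) to 1.35 (all fives)."""
    assert len(gsc_ratings) == 14
    assert all(0 <= r <= 5 for r in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return unadjusted * vaf

# Rating every characteristic "average" (3) gives VAF = 1.07:
print(adjusted_fp(78, [3] * 14))  # 83.46
```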
Additionally, it's often laborious and expensive to conduct a function point count, especially on preexisting software. Most important, the preeminent function point showstopper is political. Because it hasn't been accepted into the mainstream, function point count use is so limited that many managers don't see the need to learn to do the counts and use them.
What the ++ Entails
In the spirit of a columnist, my strengths lie more in pointing out problems than in solving them! Because I don't have a solution for extending function points, or for a similar approach that would be as effective at measuring application functionality sizes in today's environments, I'll present a few additional requirements that draw upon the key merits of function points (consistency, life-cycle friendliness, and independence):
The application functionality sizing should be consistent regardless of the technology used for development.
The application functionality sizing should take into account integration with other systems, service-oriented architecture usage, and the use of frameworks and third-party software.
The sizing process should handle efforts that target applications with different expected longevity appropriately (such that quick-and-dirty applications are sized smaller than applications that you expect to be in production for many years).
The quantification of the functionality size of a completed application must be able to be automated. It would be especially slick if tools such as UML designers could provide this capability as well.
Developing software for reusability should affect the application functionality size.
The sizing of more complex algorithms (I envision a relative scale based on a number of standard algorithms that can be used as comparison points) should be handled in a way that it appropriately affects the application size.
I hope that a reader who's smarter and more capable than I am (or at least one who has more spare time) will be able to develop a way to size applications within the boundary of these requirements. IT managers will celebrate in the streets once they have the ability to accurately measure an application's size and use that information to measure their departments. In the meantime, I'll keep looking for meaning in my ability to finish this two man-hour column in less than 90 minutes.
About the Author
Robert Northrop was formerly a director of design and development with Tallan, a professional services company specializing in developing custom technology solutions for its clients. Northrop is now an MBA student at the University of Virginia Darden Graduate School of Business Administration.
Resources
Brooks, F.P., The Mythical Man-Month, Addison-Wesley, 1975.
The Function Point FAQ: ourworld.compuserve.com/homepages/softcomp/fpfaq.htm
Garmus, D. and D. Herron, Function Point Analysis: Measurement Practices for Successful Software Projects, Addison-Wesley Information Technology Series, 2000.