Inherent and Transient Difficulties in Software Engineering
Note: The software engineering book The Mythical Man-Month does an excellent job of exploring and discussing this topic. It is highly recommended reading for those looking for a good grounding in general software development methodologies.
Transient Difficulties and the Tools That Solve Them
Some of the challenges facing software developers can rightfully be described as "transient difficulties." These are problems that improved development tools can mitigate. Debugging is a great example of a transient difficulty. Over the years, huge advances have been made in improving developers' abilities to debug their applications as they run. Debugging was once a laborious task done separately with paper and pencil, low-level tools such as disassemblers, and debug-print statements, guided by brilliantly tuned intuitions; it is now a very natural and interactive part of any modern development tool. Few developers using modern development tools today even think of debugging as a separate activity distinct from designing and writing their code. It is now all part of the natural process of software development, and developers switch smoothly from one activity to another. Not long ago this was definitely not the case, and certainly not for software running on devices. Once formidable, the difficulty of debugging code has today been greatly reduced. It was a transient difficulty, and better technology has addressed it.
Inherent Difficulties and the Methodologies That Address Them
The second class of software development challenges comprises those best described as "inherently difficult." These kinds of challenges are at the heart of software engineering. Better development tools by themselves will not solve these kinds of problems. Instead, these problems require a good methodology to guide the engineering effort and ensure that software projects succeed despite the challenges.

A good example of an inherently difficult challenge is algorithm design. Modern object-oriented development languages have made code encapsulation and organization much easier, but they have not made algorithm design easy or automatic. Writing good algorithms is certainly aided by the richer set of base classes that modern programming frameworks offer, but designing fundamentally new algorithms is still hard work and will probably remain so for the foreseeable future. This is because algorithms are inherently purpose-specific, and there is no known general way to translate your intent into the best possible algorithm; this can be done in an automated way only if the scope of the problem being solved is narrowed down greatly and specific tools are built for the task. The same holds true for writing multithreaded code. Better tools, programming languages, and libraries can help, and the problem can be solved for many well-bounded cases; however, a good general-purpose machine for breaking generic problems into parallel pieces eludes us. It appears to be an inherently hard problem that requires careful design and good methodology.

Modern programming languages and graphical design surfaces can make developers more expressive, but they do not take away the need for good algorithmic design skills. These skills are needed to build the critical systems that drive the behavior and efficiency of software. The best that modern programming technology has achieved is allowing for the packaging of complex algorithms into reusable components and frameworks, and the modeling of interactions between these components. This allows commonly used critical systems to be designed by experts and reused by generalists. Modeling technologies such as UML (Unified Modeling Language) and graphical design surfaces can make component design simpler and communication clearer between component authors and clients, but they do not remove the core complexity of good algorithm design. Tools will continue to get better and reduce the transient difficulties, but the core challenges will remain. Components have great utility because they allow the reuse of hard work, but they do not make the hard work of designing the algorithms any easier.

Component-oriented design is a methodology that has emerged to help software engineers address one of the most vexing problems facing software development. It advises developers and architects to separate their problems into discrete and layered components. Critical components and algorithms are identified and get the most expert design, coding, and testing. Higher-level, generally less rigorously tested code uses these components to provide essential functionality to the applications being developed. Componentization succeeds as a methodology because it allows the partitioning of applications; it lets engineers, architects, and managers identify and concentrate on the most difficult algorithmic challenges. Like any methodology, componentization can be overused and misused.
If everything possible is made into a separate component under the mistaken belief that the more components a project is broken into, the better the engineering, the result will be overly complex interfaces between a myriad of different components. Modeling tools may help visualize this, but a complicated mess is still a complicated mess. Too many unnecessary components blur the sharp focus that componentization is intended to place on ensuring the excellence of critical pieces. Methodologies must be applied wisely and with explicit goals in mind. A methodology is only useful if it can be measured against goals to ensure that its benefits are being realized.
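To make the componentization point concrete, the brief sketch below (not from the original text; Java is used purely for illustration, and the task values are hypothetical) reuses an expert-built concurrency component, the thread pool in java.util.concurrent, rather than hand-rolling thread management, which the discussion above identifies as inherently hard to get right.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ComponentReuseExample {
    public static void main(String[] args) throws Exception {
        // Reuse the framework's expert-built thread pool component rather
        // than writing custom thread management and work-queue code.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks = List.of(
                    () -> expensiveCalculation(1),
                    () -> expensiveCalculation(2),
                    () -> expensiveCalculation(3));

            // invokeAll handles scheduling, synchronization, and completion;
            // application code stays focused on its own logic.
            for (Future<Integer> result : pool.invokeAll(tasks)) {
                System.out.println("Result: " + result.get());
            }
        } finally {
            pool.shutdown();
        }
    }

    // Stand-in for an application-specific computation (hypothetical).
    private static int expensiveCalculation(int input) {
        return input * input;
    }
}
```

The critical scheduling and synchronization logic lives in a component designed and tested by specialists; the application developer only composes it.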
Two Good Cases for a Component-Based Methodology
Working with XML and working with cryptography are two areas where componentization has effectively solved complex problems and significantly reduced the difficulty of working with these technologies. XML parsing is a technical challenge where a methodology of component reuse makes good sense. Very few developers write their own XML parsers for their applications, for the simple reason that it is a difficult job to design and implement a high-performing, highly robust, general-purpose XML parser. If everyone who wanted to use XML in an application had to write his or her own XML parser, all kinds of small bugs and inconsistencies would be produced. This would defeat the interoperability promise that XML is founded on. Other than as an academic exercise in algorithm design (and it is a great exercise), writing your own XML parser is a fool's errand; you will never get it done as well as the people whose sole job it is to write and test one. Instead of having a software development process where everyone writes his or her own specific XML parsing routines, there is a design methodology that recommends using prebuilt and tested general-purpose components. Several native-code and managed-code XML parsers have been created with very rigorous algorithmic design and testing processes. These few components are reused by the many application developers who want to use XML. This is a methodological approach to solving the inherently difficult problem of building a robust XML parser.

Another great example where a methodology of reuse is favored over custom implementation is cryptography. The design of cryptographic algorithms is a complex and specialized art form. Conflicting with this need for specialization is the fact that cryptographic functionality becomes more important every day to developers building common applications. It has been demonstrated that a great way to build an insecure application is to write your own cryptographic algorithms. Unless you are doing cryptographic research, building your own cryptographic systems is highly discouraged by good software engineering practice. Designing, implementing, and maintaining a cryptographic system that is secure, robust, and fast is very hard work. It would be foolish and error prone for everyone who needed cryptographic functionality to write his or her own cryptographic systems into his or her applications. So here we have another clear inherent difficulty in algorithm design that better tools do not solve for us. It is addressed not by throwing one's hands up in the air and saying, "Well, it's hard and that's it," but by coming up with a software development methodology for componentization of common, difficult algorithmic problems, coupled with a methodology that makes certain all members of the project team use these components in a consistent way. A further benefit of using an off-the-shelf component methodology to address these kinds of problems is a reduced maintenance burden. When flaws are found in these components (which they inevitably will be), they can be fixed centrally instead of having the same kinds of problems pop up across multiple software implementations.
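As an illustrative sketch of the XML case (in Java, with hypothetical data; this example is not from the original text), the few lines below reuse the platform's rigorously tested XML parser component instead of writing application-specific parsing routines:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlReuseExample {
    public static void main(String[] args) throws Exception {
        String xml = "<order id=\"42\"><item>widget</item></order>";  // hypothetical input

        // Reuse a prebuilt, heavily tested general-purpose parser component.
        DocumentBuilder parser =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = parser.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        System.out.println("Root element: " + doc.getDocumentElement().getTagName());
        System.out.println("Order id: " + doc.getDocumentElement().getAttribute("id"));
    }
}
```

The cryptography case looks much the same: rely on the platform's reviewed cryptographic components and never implement the algorithms yourself. The sketch below is likewise illustrative only; the algorithm choice (AES-GCM) and the data are assumptions, not recommendations from the original text.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CryptoReuseExample {
    public static void main(String[] args) throws Exception {
        // Use the platform's vetted components; do not design your own ciphers.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        byte[] iv = new byte[12];            // fresh random nonce for each message
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(
                "sensitive data".getBytes(StandardCharsets.UTF_8));

        System.out.println("Encrypted " + ciphertext.length + " bytes");
    }
}
```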
Any mobile software development project of sufficient complexity will need a good set of methodologies that address the inherent difficulties present in mobile software development. There needs to be a process for identifying the difficult problems and making sure they are addressed in the best way possible. Componentization addresses one specific type of software development challenge. Along with appropriate componentization, other complementary methodologies will be required to address additional challenges such as ensuring the necessary application performance, designing good mobile device user interfaces, and building robust mobile communications. Knowing how to spot the inherent difficulties and how to go about addressing them is the hallmark of a good development lead. To sum up, there are problems that development tools can help us with, and these problems will get easier every year as tools advance. In contrast to these transiently difficult problems, there are also inherently difficult problems that can be solved only by having the right software engineering approaches. This means having a methodology that makes sure the most important problems do not get addressed in an ad hoc manner but rather get the full attention and deep consideration they require.
A Few Software Development Approaches Through the Years
Every successive generation of computing has had its own challenges and corresponding methodologies for success. These methodologies have been, and continue to be, based on both the available state of the art in development tools and the specific kinds of solutions being developed. Initially, computing resources such as memory, registers, and processing cycles were very scarce, so writing the most compact and efficient algorithms was imperative. This is still a goal to strive for, but it is now balanced by the great long-term benefits of writing serviceable and reusable code. Today the goal is not simply to write the most efficient code but rather to write the most efficient code that is understandable, reliable, and maintainable.

Batch Computing

In the era of batch computing (that is, before interactive software), the algorithm was king. It was possible, and indeed essential, to specify the inputs into the system as well as the outputs expected from the system before starting coding. Because the usage model was based on input -> processing -> output, a great deal of time was spent working on the core algorithm, and the avoidance of unnecessary space or storage usage was a paramount design goal. This model has great merit and is still the ideal to strive for in designing individual procedures that process information in a batch mode (for example, in sorting algorithm design). Individual functions can be designed this way, but complex, user-interactive systems cannot. This was the era of the "flowchart."

Stateless Server Processing

When building reliable server applications that respond to requests, the Holy Grail is to be "stateless." Your code gets a request, processes it, and then returns the results without any need to maintain state between successive requests. This is very similar to batch processing in its use of the input -> processing -> output model. Complex modern Web applications do, of course, maintain state between successive requests (for example, shopping carts on Amazon), and this is best achieved by managing the state in a very central, encapsulated way so that as much code as possible runs in a stateless manner.

Event-Driven Interactive Computing

Event-driven computing represents the computing model where an application is a long-standing conversation with an end user. Code is run to respond to user actions and requests; the application reacts to what the user does. In contrast to a batch application, an interactive application does not have a fixed termination time but keeps on processing new events triggered by the user until the user decides his or her session is over. This programming model is commonly called "event driven" because discrete events are generated based on user actions, and the developer's application responds to these events (for example, mouse click events, list selection events, and window close events). A clean state management model is essential for building good event-driven interactive applications.
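A minimal, self-contained sketch of the event-driven model described above follows (in Java; the event names, dispatch loop, and handlers are illustrative inventions rather than anything from the original text, and real applications would rely on a UI toolkit's event system). Handlers are registered for discrete events, and a loop dispatches events until the user ends the session:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

// A toy event loop: handlers are registered for named events, and the loop
// keeps dispatching until the user decides the session is over.
public class EventLoopSketch {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    private final Queue<String> pendingEvents = new ArrayDeque<>();
    private boolean running = true;

    public void on(String eventName, Consumer<String> handler) {
        handlers.put(eventName, handler);
    }

    public void post(String eventName) {
        pendingEvents.add(eventName);
    }

    public void run() {
        while (running && !pendingEvents.isEmpty()) {
            String event = pendingEvents.poll();
            // The application reacts to what the user does.
            handlers.getOrDefault(event, e -> { }).accept(event);
        }
    }

    public static void main(String[] args) {
        EventLoopSketch app = new EventLoopSketch();

        app.on("button.click", e -> System.out.println("Button clicked"));
        app.on("window.close", e -> {
            System.out.println("Session over");
            app.running = false;
        });

        // Simulated user actions; a real UI toolkit generates these events.
        app.post("button.click");
        app.post("window.close");
        app.run();
    }
}
```

Note how, unlike a batch program, the application's behavior is not fixed in advance; a clean, central place to keep state (here, the handler map and event queue) is what keeps the model manageable.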