Writing Mobile Code: Essential Software Engineering for Building Mobile Applications


Ivo Salmre


The Importance of a Disciplined Approach


Thomas Edison, no slouch himself at building useful devices, is reported to have said, "Genius is 1 percent inspiration and 99 percent perspiration." Achieving good performance is similar: it is 80 percent discipline and 20 percent creativity. Discipline in your development process will allow you to find your performance problems and prevent you from ignoring or deferring them. Ignoring performance problems will be tempting; fixing them does not add features and is not perceived as sexy design work. Nevertheless, addressing performance in a methodical way is critically important if you are going to succeed in the mobile development work you undertake. Disciplined work will allow you both to uncover performance problems early and to find solutions to them before they become ingrained in your design. Many of these problems can be solved by adapting the ideas presented here and found through your own further research. In cases where no useful precedent exists, the occasional burst of creative genius may allow you to see a solution that no one has thought of yet and to break new ground in your design. The most important thing is having a disciplined approach that forces you to confront the performance problems that will inevitably come up and to keep at them until they are solved. When developing the light bulb, Thomas Edison went through hundreds of filament materials before he settled on the one that provided the best combination of incandescence and longevity. So it is with performance: it is all about testing and learning what works. As Mr. Edison showed, focus, persistence, and creativity pay great rewards.

Define the User Scenarios That Count


You should have a list of the key user experience scenarios. Some of these may be general in nature, such as the following:

The user will never be left waiting for more than 0.5 seconds without some kind of visual notification that things are happening.

The user will never be left waiting for more than 4 seconds without the ability to abort a long duration operation.


Some scenarios will be very specific:

Starting a new game of chess should not take more than 1 second.

Accessing a customer's order information should not take longer than 3 seconds.


Write down the scenarios in your main design document. Whether you are working alone or in a group, it is always useful to have a written reminder of what the experience of using your application is supposed to be. This list may change over time to reflect things you learn in testing your application, but having the key scenarios explicitly listed in one place is worth the effort.

Use Software Development Milestones with Performance-Driven Exit Criteria


Having concrete milestones to measure your progress toward completing your application is a proven mechanism for building and shipping great software. Without milestones to measure your progress against defined metrics, you are just wandering toward your goal, sometimes making progress, sometimes moving askew, and sometimes regressing. Milestones are both a way of marking and celebrating the progress toward your goals and a way of preventing difficult but necessary decisions from being indefinitely deferred.

To be effective, milestones should define the user scenarios that must work at the exit of the milestone, the features that must be "code complete," and the specific performance metrics that must be met in order to exit the milestone. Getting to a milestone is usually relatively easy; it is the point where your development team writes (or you write, if you are working alone) the necessary code to make the things work that you have outlined in your milestone's goal list. The real work is successfully "exiting the milestone"; this is where disagreements, contention, and poor compromises can arise. The basic code may be written to enable a scenario, but does this really meet the spirit of the milestone? Is the code cobbled together and full of so many hacks that it cannot really be considered code complete, and is the performance "good enough to ship"? These are the key questions that need to be answered. This is the point where real leadership and discipline need to be applied to make sure the project is truly on track to exit the milestone and move another big step toward successful completion.


A Few Words on Meeting Development Milestones


It is important to ensure that a project does not limp across a milestone but rather strides across it confidently, with team members proud of and in agreement about what was accomplished. There will be a temptation to put off performance goals because "they are not really features"; don't do it! Experience has shown over and over again that it is far better to cross a milestone confidently but behind schedule than it is to limp through the milestone because the goals have been compromised.

Making a milestone's goals easier to achieve does nothing to bring the end goal closer. When faced with a schedule crisis, you only have three choices:

Cut product features to allow concentration on the remaining features.
If you are going to end up cutting a feature, cut it early and concentrate on doing an excellent job with the remaining features. Cutting features late can be very painful for the people who were working on the code, for the end users who expected the features, and for the rest of the product, which may have dependencies on the feature. Because cutting features is difficult, it must be done correctly, as early as possible, and with leadership.

Extend the schedule to something more realistic based on the facts you now have at hand.
You will learn things as you go along that will enable you to better adjust the schedules. If the milestone's schedule needs to be moved out along with the rest of the product's schedule, do it decisively and realistically. The end result needs to be a new schedule that people believe in.

Find ways to level the workload among members of the team.
It is almost never effective mid-cycle to bring additional members who are not familiar with the work already going on onto a development team. However, it may be possible to better distribute the work among members of the team. As with the two previous points, realism, promptness, and decisiveness are important to making workload decisions. It is better to have one painful workload-leveling exercise than a continuous series of them.

From a performance perspective, exit criteria for milestones are critical for the simple reason that the performance of an application usually cannot be made significantly better later in the application's development cycle without drastic redesign. The common excuse of "We've got to hit code complete, we'll tackle performance in an upcoming milestone" simply does not pass muster. Addressing performance concerns invariably becomes more difficult later in the software development cycle because as more code gets written interdependencies pile up and the necessary design changes become more difficult to make. When someone says, "We'll investigate and fix the performance problems later in the product cycle," what they are really saying is, "We don't understand the performance problems we have now and can't prove that we can fix them later. What we are building right now is a prototype that we might end up shipping anyway; we'll probably rewrite a huge part of the application when we build it for real."

Have specific and realistic performance exit criteria for your milestones. It is far better to delay exiting a project milestone than it is to crawl over the finish line with a limping application. Addressing performance problems in the milestones where the underperforming code was written will allow the maximum amount of creativity to be applied to solving the problems while options are still open to do creative things. The performance exit criteria should cover two areas: end-user responsiveness and absolute performance of critical algorithms.

Consider End-User Responsiveness


The user interface of your mobile application must remain responsive to people using the device it is running on. If your application exhibits stalls that leave the user waiting without any feedback, make explicit decisions about how these conditions will be handled.

Can the stalls be removed or sped up through better design?

Can the stalls be handled by bringing up splash screens or wait cursors that let the user know activity is going on?

Can the work be moved to a background thread to keep the user interface responsive? (A sketch of this approach follows this list.)
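
As a rough illustration of that last question, here is a minimal sketch of moving long-running work onto a background thread in a Windows Forms-style .NET Compact Framework application. The form, control, and method names are hypothetical; the pattern is simply to do the slow work on a worker thread and marshal the completion notification back to the user interface thread with Control.Invoke.

using System;
using System.Threading;
using System.Windows.Forms;

public class ReportForm : Form
{
    private Label m_statusLabel = new Label();   //Hypothetical status control

    //Called from a button's Click handler (on the UI thread)
    private void StartLongOperation()
    {
        m_statusLabel.Text = "Working...";
        //Do the slow work on a worker thread so the UI thread keeps
        //pumping paint and input messages.
        Thread worker = new Thread(new ThreadStart(DoSlowWork));
        worker.Start();
    }

    //Runs on the background thread
    private void DoSlowWork()
    {
        //...long-running calculation, file parsing, network call, etc...

        //Windows Forms controls may only be touched from the thread
        //that created them, so marshal the completion notification
        //back to the UI thread.
        this.Invoke(new EventHandler(WorkFinished));
    }

    //Runs back on the UI thread
    private void WorkFinished(object sender, EventArgs e)
    {
        m_statusLabel.Text = "Done.";
    }
}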


Consider the Absolute Performance of Critical Algorithms


There will be some key algorithms that your application uses that have a disproportionate effect on the experience users perceive. These are algorithms that do things central to the application. They may do things like load data from a database, parse files, compute graphs for display on the device, or construct reports to show the user. For example, if it takes three minutes to draw a chart based on data the user has gathered, this is too long, even if the user interface remains responsive during the wait. For these key algorithms, care and creativity must be used to ensure an acceptable user experience.

Some questions to ask when analyzing critical algorithms include the following:

Can the algorithms be sped up by tuning or redesign?

Can the algorithms' execution needs be predicted and front-loaded before the user demands them so that the user perceives a better experience?

Can any of the heavy lifting be done off the device by a server?

Can things be precalculated or predrawn at design time to remove or reduce the need for any runtime computation? (A small lookup-table sketch follows this list.)
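
As a small sketch of the precalculation idea, the table below contains needle offsets for drawing a hypothetical 0-10 gauge. The values were worked out ahead of time (and are illustrative rather than real measurements), so at run time a simple array lookup replaces trigonometry.

using System;

internal class GaugeGeometry
{
    //Precalculated at design time: x/y needle offsets (scaled by 100)
    //for gauge values 0 through 10, so no Math.Sin/Math.Cos calls are
    //needed at run time. Values are illustrative only.
    private static readonly int[] m_needleX =
        { -100, -95, -81, -59, -31, 0, 31, 59, 81, 95, 100 };
    private static readonly int[] m_needleY =
        { 0, -31, -59, -81, -95, -100, -95, -81, -59, -31, 0 };

    internal static void GetNeedleOffset(int gaugeValue,
        out int x, out int y)
    {
        //Clamp to the valid range, then do a simple table lookup.
        if (gaugeValue < 0) gaugeValue = 0;
        if (gaugeValue > 10) gaugeValue = 10;
        x = m_needleX[gaugeValue];
        y = m_needleY[gaugeValue];
    }
}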


To sum up on milestones and performance goals: Having performance-driven exit criteria for your milestones and making them a disciplined part of your development process will make certain that your software development project stays on track.

Perform Code Reviews


Having performance-oriented code reviews of the application code being written is a great way to improve your code quality. Although design by committee is not a great strategy for building algorithms, peer review of code is a proven method of improving code quality. Code reviews offer two benefits:

Identification of improvements to algorithms
Both the act of preparing for a code review and the review itself help identify ways to write better performing algorithm code and design more responsive user interfaces.

Sharing of skills, lessons learned, and increased code familiarity
If the code being reviewed demonstrates some particularly novel approach to solving a difficult performance problem, this is a good learning tool for everyone participating in the code review. In addition, everyone will benefit from a deeper shared understanding of how the application's component pieces function.

To get the most out of a code peer review, it is important to have consistent coding standards that members of the group adhere to. Using common mechanisms for implementing state machines, resource caches, and user interface code will go a long way in bringing all the members of the team to a common understanding and allow best practices to be adopted and common mistakes to be quickly spotted and fixed.

It is advisable to develop a list of standard guidelines that outline how code is to be written in your projects. This can be as simple as annotating a sample source file to show coding conventions, or it can be a comprehensive design guide. What degree of specification is most appropriate depends on your organization's capabilities and needs. If no standards or guidelines already exist, it is worth putting some together. Start simple, borrow liberally from existing published guidelines, and make sure the document is useful and used by members of your team. Consistency is its own reward.

Define Your Application's Memory Model


Because mobile applications run in constrained memory environments, it is valuable to specify and maintain an explicit model that describes how memory will be used and managed by your application. In today's higher-level, object-oriented, and garbage-collected computing environments, this usually does not require specifically tracking how individual chunks of memory get allocated, although this is important in lower-level programming for things such as device drivers. Instead, it is important that your mobile applications have a defined model for what is kept in memory and for how long. As part of the design process, answer the following questions:

What global resources will be cached in memory?
Some things are useful to cache. Some things are wasteful to hold on to. Be crisp in your thinking about both.

How much application data will be loaded into memory at any given time?
Most applications deal with sizable amounts of data, only some of which needs to be loaded at any given time.

Under what circumstances will loaded data and resources be discarded?
Having a housecleaning model for getting rid of data and resources that are no longer needed is important to make room for other data and resources. (A small cache sketch appears at the end of this section.)


As a developer you have a choice to either manage these important aspects explicitly or have them implicitly grow until the application becomes unwieldy. These considerations were discussed in the earlier chapter on the usage of state machines to manage an application's memory model. If you have not read this yet, it is worth going back and reviewing this material. This will also be the focus of the upcoming chapter dedicated to performance and memory management.
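
As a rough illustration of the questions above, the sketch below caches loaded data items but caps how many are held in memory at once, discarding the whole cache when the cap is exceeded. The names and the crude discard-everything housecleaning policy are hypothetical; a real application would choose a policy that matches its own usage patterns.

using System;
using System.Collections;

internal class DataItemCache
{
    //Hypothetical cap on how many items we are willing to keep loaded.
    private const int MAX_LOADED_ITEMS = 50;
    private static Hashtable m_loadedItems = new Hashtable();

    internal static object GetItem(string key)
    {
        if (m_loadedItems.Contains(key))
        {
            return m_loadedItems[key];
        }
        //Housecleaning: if the cache is full, discard everything and
        //start over rather than letting memory use grow without bound.
        if (m_loadedItems.Count >= MAX_LOADED_ITEMS)
        {
            m_loadedItems.Clear();
        }
        object item = LoadItemFromStorage(key);  //Hypothetical load step
        m_loadedItems[key] = item;
        return item;
    }

    private static object LoadItemFromStorage(string key)
    {
        //...load the item from a file, database, or network here...
        return key;
    }
}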

Measure Often and Incrementally


Find ways to measure the important characteristics of your application. Measurement is the key to gaining the feedback that will guide your performance-tuning work. Algorithm execution durations and user interface responsiveness can be measured. Numbers of object allocations and memory usage can be measured. When you identify a performance problem, spend time thinking about which pieces of data would be most useful to you and how to gather that data. A few suggestions, in order of decreasing preference:

Use code instrumentation.
With only a little bit of effort, it is possible to add code to measure things in your application. This process is called instrumentation. If you can gather sufficient information through the simple instrumentation of your code, you will be in a good position to optimize your designs. Gathering timing information is usually easy to do; sample code is provided later in this chapter. Additionally, it is often possible to make native operating system calls to get higher-resolution timing data if it is necessary. Comparing different algorithms based on information gleaned from code instrumentation is often very enlightening.

Consider automated testing and logging of performance metrics.
When possible, it is useful to have automated tests that can be run to log performance metrics for key scenarios. If it is possible to instrument your code and generate consistent metrics that are measured and stored from build to build, this will be a great aid in tracking down performance problems when they occur. If consistent build-to-build metrics are available to compare a mobile application's performance in completing key tasks, it will be possible to quickly pinpoint when regressions occur. Quick detection of problems is the easiest way to isolate and identify the design changes that were responsible for any performance regressions.

Use runtime-generated metrics and device profiling tools.
Some managed code-execution environments offer the ability to get metrics from the managed-code runtime. These runtime metrics measure things such as code execution times, numbers of object allocations, and numbers of garbage collections. This information can be useful in tuning specific algorithms and finding cases where unexpected object allocations are occurring. The best strategy for doing this kind of analysis is to isolate the code you are trying to analyze as much as possible. If you can place the specific code you want to analyze in a separate project and run it to gather the metrics, you can get accurate and actionable data on the algorithm's performance characteristics. This kind of analysis is helpful in comparing the efficiency of different algorithms. If advanced runtime profiling hooks and analysis tools are available, these can also offer great insight into where the processing time is being spent. Note: See the section below titled "Getting Profiling Information from the .NET Compact Framework" for a description on how to get basic profiling metrics from the .NET Compact Framework.

Use algorithm metrics from desktop/server profiling tools.
Some terrific managed-code profiling tools exist for desktop and server runtime environments. The state of the art in profiling tools for desktop and server code is presently considerably ahead of the state of the art for devices; this is likely to remain the case for some time to come. Partially this is due to the greater maturity of the desktop and server markets, and partially it is due to the fact that desktop and server runtimes have larger memory budgets to support all kinds of profiling hooks. Expect device runtimes to mature and gain richer profiling capabilities but still lag their desktop and server counterparts. Desktop and server analysis tools can give in-depth views of which functions are using the bulk of the processing time and where potential inefficiencies lie in your code. Additionally, today's profiling tools often integrate into standard development environments, making analysis much easier. Running your device code in a desktop or server environment will let you view how it performs on these platforms and may give you some insight into how it behaves on devices. Note: Keep in mind that desktop, server, and device JIT, exception handling, and garbage-collection strategies are likely to vary significantly. Desktops and servers have much greater memory capacities than mobile devices, and different microprocessors excel at different kinds of calculations. Using desktop or server profiling results may bring useful insights, but some of the code's performance characteristics will differ when run on devices. Device performance will differ particularly if the code makes extensive use of file I/O, networking, or graphics because these will vary greatly on different hardware and operating systems. Desktop and server profiling tools are helpful for improving your intuition and understanding where object allocations are occurring, but the final proof is always on the device.

Use memory-usage data from the device operating system and device native code profiling tools.
It is often possible to make native operating system calls to get higher-resolution timing data and memory-usage data. It is also possible to use native code profiling tools to get code execution statistics. If you are building a native code application or component, this is exactly the data you want; go forth and measure. If you are trying to get information about a managed-code application, this can be tricky. When getting application or system memory-usage data from the operating system, be aware that a managed runtime's behavior may only roughly correlate to system memory usage. Because of things such as periodic garbage collections, your memory-usage chart is likely to exhibit a saw-tooth wave pattern as discarded objects build up and then are periodically freed by the garbage collector. This information can still be useful if you are trying to look for long-term memory leaks. Alternatively, you can preemptively call the garbage collector before taking a memory sample, but only do this during testing; manually triggering the garbage collector during normal application execution will almost always detrimentally affect the application's performance. Regardless, getting memory-usage data from the operating system directly may require native operating system calls and some tinkering. If you can get the data you need from the managed runtime, all the better. (A short memory-sampling sketch follows this list.)

Of these strategies, code instrumentation will give you the quickest and most valuable first-order results. When gathering metrics to analyze your performance, start with simple instrumentation and move on from there if needed.
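
For the memory-usage suggestion above, a rough sketch (test code only) might look like the following. The P/Invoke into coredll.dll is the usual Windows CE route to GlobalMemoryStatus; verify the structure layout and availability on your own target platform before relying on it.

using System;
using System.Runtime.InteropServices;

internal class MemorySampling
{
    //Mirrors the Windows CE MEMORYSTATUS structure (all fields are DWORDs).
    [StructLayout(LayoutKind.Sequential)]
    internal struct MEMORYSTATUS
    {
        internal uint dwLength;
        internal uint dwMemoryLoad;
        internal uint dwTotalPhys;
        internal uint dwAvailPhys;
        internal uint dwTotalPageFile;
        internal uint dwAvailPageFile;
        internal uint dwTotalVirtual;
        internal uint dwAvailVirtual;
    }

    [DllImport("coredll.dll")]
    private static extern void GlobalMemoryStatus(ref MEMORYSTATUS status);

    //Returns the free physical memory in bytes, collecting garbage first
    //so discarded objects do not distort the reading. Test code only;
    //do not force collections in normal application execution.
    internal static uint GetAvailablePhysicalMemory()
    {
        System.GC.Collect();

        MEMORYSTATUS status = new MEMORYSTATUS();
        status.dwLength = (uint) Marshal.SizeOf(typeof(MEMORYSTATUS));
        GlobalMemoryStatus(ref status);
        return status.dwAvailPhys;
    }
}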


Getting Profiling Information from the .NET Compact Framework


The .NET Compact Framework version 1.1 has a mechanism built into it that allows the generation of "quick and dirty" execution metrics for the managed-code applications it runs. It works at the application level, outputting a text file at the end of an application's execution. The text file contains data such as the following:

The total execution time

The number of objects allocated during code execution

The number of garbage collections during code execution

The number of exceptions thrown (Exceptions can be expensive if thrown often and needlessly inside loops.)


Because the metrics generated are application level, this kind of data works best if you are trying to compare two different algorithms or tune a single algorithm. The algorithms to be tested should each be isolated into their own executable files. The executables should have as little else in them as possible to guarantee that the algorithm being tested produces the bulk of execution activities. If appropriate, remove any forms or other application logic that can force unwanted components to get loaded, JITed, and run.

For technical instructions on how to enable the generation of these runtime metrics, refer to the article titled "Developing Well-Performing .NET Compact Framework Applications" at [http://msdn.microsoft.com].

As of this writing, the article could be found at the URL [http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnnetcom150/netcfperf.asp].

This article also contains many interesting performance comparisons for accessing functionality in different ways and is well worth a read.

A Measurement Tool You Can Use


Because designing for performance is an exploratory art, it is useful to have a few handy tools at your side to help you take quick measurements. The sample code below is intended to give you a tool you can use to instrument your code.

The code in Listing 7.1 can be used easily and is small enough to include as part of your mobile application with little overhead. It is intended to be a general-purpose performance probe to enable you to take quick spot measurements of code-execution times. This will give you a quick idea of how long code execution takes and enable you to identify trouble spots and areas where new design strategies may be needed. The code is also useful as a way to quickly compare two different approaches to see which is superior for your needs. For example, if it takes three seconds for you to set up your user interface and fill a TreeView control with data, you might consider changing your algorithm to fill in only the top-level nodes and defer the filling of the child nodes until they are needed. The final proof is in the measurement and the end user's experience.

Listing 7.1. Performance Sampling Code to Instrument Your Code With



using System;

internal class PerformanceSampling
{
    //Arbitrary number, but 8 seems like enough samplers for
    //most uses.
    const int NUMBER_SAMPLERS = 8;

    static string [] m_perfSamplesNames =
        new string[NUMBER_SAMPLERS];
    static int [] m_perfSamplesStartTicks =
        new int[NUMBER_SAMPLERS];
    static int [] m_perfSamplesDuration =
        new int[NUMBER_SAMPLERS];

    //Take a start tick count for a sample
    internal static void StartSample(int sampleIndex,
        string sampleName)
    {
        m_perfSamplesNames[sampleIndex] = sampleName;
        m_perfSamplesStartTicks[sampleIndex] =
            System.Environment.TickCount;
    }

    //Take the stop tick count for a sample and record its duration
    internal static void StopSample(int sampleIndex)
    {
        int stopTickCount = System.Environment.TickCount;
        //The tick counter wraps around about every 24.9 days
        //(which is about 2 billion ms);
        //we'll account for this unlikely possibility.
        if (stopTickCount >= m_perfSamplesStartTicks[sampleIndex])
        {
            //In almost all cases we will run this code.
            m_perfSamplesDuration[sampleIndex] =
                stopTickCount - m_perfSamplesStartTicks[sampleIndex];
        }
        else
        {
            //The tick count wrapped from Int32.MaxValue to
            //Int32.MinValue during the sample; account for this.
            m_perfSamplesDuration[sampleIndex] =
                (stopTickCount - int.MinValue) +
                (int.MaxValue - m_perfSamplesStartTicks[sampleIndex]) + 1;
        }
    }

    //Return the length of a sample we have taken
    //(length in milliseconds)
    internal static int GetSampleDuration(int sampleIndex)
    {
        return m_perfSamplesDuration[sampleIndex];
    }

    //Return a text description of the sample's duration in seconds
    internal static string GetSampleDurationText(int sampleIndex)
    {
        return m_perfSamplesNames[sampleIndex] + ": " +
            System.Convert.ToString(
                m_perfSamplesDuration[sampleIndex] / 1000.0
            ) + " seconds.";
    }
}

Note

The .NET Framework documentation claims that the resolution of the TickCount property cannot be less than 500 ms (0.5 seconds). In practice, I have found the resolution to be quite a bit better than this (under 100 ms, or 0.1 seconds). You will have to do your own experimentation. If you find you need a higher-resolution counter, you can modify the code above to make calls into the native code operating system and access lower-level system counters. For most cases, the code above should suffice, and its simplicity makes it very attractive to use when quick measurements are required.
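
If you do need finer-grained timing, a high-resolution counter is usually available from the operating system. The sketch below shows the typical P/Invoke route on Windows CE (coredll.dll); on the desktop the same functions live in kernel32.dll. Availability and resolution vary by device, so treat this as a starting point to verify on your own hardware.

using System;
using System.Runtime.InteropServices;

internal class HighResolutionTimer
{
    [DllImport("coredll.dll")]
    private static extern int QueryPerformanceCounter(out long count);

    [DllImport("coredll.dll")]
    private static extern int QueryPerformanceFrequency(out long frequency);

    private long m_startCount;

    internal void Start()
    {
        QueryPerformanceCounter(out m_startCount);
    }

    //Returns the elapsed time in milliseconds since Start() was called.
    internal double GetElapsedMilliseconds()
    {
        long stopCount;
        long frequency;
        QueryPerformanceCounter(out stopCount);
        QueryPerformanceFrequency(out frequency);
        return (stopCount - m_startCount) * 1000.0 / (double) frequency;
    }
}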

Listing 7.2. Test Code Showing Use of Timing Instrumentation Code Above



private void button1_Click(object sender, System.EventArgs e)
{
    const int TEST_SAMPLE_INDEX = 2; //Choose any valid index

    //Start sampling
    PerformanceSampling.StartSample(TEST_SAMPLE_INDEX,
        "TestSample");

    //Show the message box
    System.Windows.Forms.MessageBox.Show(
        "Hit OK to finish Sample");

    //Stop sampling
    PerformanceSampling.StopSample(TEST_SAMPLE_INDEX);

    //Show the results
    System.Windows.Forms.MessageBox.Show(
        PerformanceSampling.GetSampleDurationText(
            TEST_SAMPLE_INDEX));
}


Tips for Getting Good Measurement Results


As a rule of thumb, the longer the duration of the event you are measuring, the lower the margin of error. So if you are comparing two algorithms that take around 0.5 seconds to run, consider running them 10 times in a row and dividing the result by 10. (A short sketch of this approach follows these tips.)

Repeat any experiment several times and make sure you are getting reasonably similar results.

When comparing two algorithms head to head, consider running each algorithm several times and throwing out the first measurement of each if it differs significantly from the other measurements. The first time your application runs code, it may force dependent libraries to be loaded and compiled. If both algorithms have similar dependencies, the first measurement will take the hit of loading and JITing this code. This JIT time should usually not be included in your comparative measurements.

For best results when comparing two different algorithms, restart the application between tests. Restarting the application will flush out all precompiled and cached code and give you a common starting point for each test.

As with almost any timing measurement, there are no perfect measurements, only good-enough measurements. The important thing is to make the effect you are measuring much larger than the noise level that will always be present. No matter what you are trying to measure, there are always ways to tune your technique to get the results you need.
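
For example, a head-to-head comparison of two short algorithms might look roughly like the sketch below, which uses the PerformanceSampling class from Listing 7.1. AlgorithmA and AlgorithmB are hypothetical stand-ins for whatever code you are comparing.

private void CompareAlgorithms()
{
    const int RUN_COUNT = 10;   //Run each algorithm several times so the
                                //measurement dwarfs timer noise.

    //Throw-away first runs to absorb JIT and library-loading costs.
    AlgorithmA();
    AlgorithmB();

    PerformanceSampling.StartSample(0, "Algorithm A");
    for (int i = 0; i < RUN_COUNT; i++)
    {
        AlgorithmA();
    }
    PerformanceSampling.StopSample(0);

    PerformanceSampling.StartSample(1, "Algorithm B");
    for (int i = 0; i < RUN_COUNT; i++)
    {
        AlgorithmB();
    }
    PerformanceSampling.StopSample(1);

    //Divide each total by RUN_COUNT to get the per-run cost.
    System.Windows.Forms.MessageBox.Show(
        PerformanceSampling.GetSampleDurationText(0) + " " +
        PerformanceSampling.GetSampleDurationText(1));
}

//Stand-ins for the code being compared.
private void AlgorithmA() { /* ...first approach... */ }
private void AlgorithmB() { /* ...second approach... */ }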

Test with Real Data Sizes


A common mistake developers make is to design and test their algorithms using smaller-than-real-world data. Real-world data can take longer to load and save and, just as importantly, can take up more valuable application memory space and dramatically slow down the overall performance of a mobile application. Some typical examples where developers may develop and test with smaller sizes of data than end users will encounter include the following:

An application that uses a database is tested with 20 rows of sample data when in production 200 or 2,000 rows will be used.

An XML text file needs to be parsed as part of the application. A 15KB file is used instead of the 300KB file that will be used in production.

An application dealing with digital photographs is being designed. Photographs are downloaded from a Web server and cached locally on the device. During development, four sample 200KB pictures are used rather than the 800KB (or larger) digital photographs that are normally taken by digital cameras.


The mistakes shown in the examples above are understandable. When designing an application, data formats often change and it is much easier to remake a file with 10 rows of data than one with 200. It is also much easier to run and test applications when you do not need to wait for all the data to load at startup time. Sadly, the end users will not have this luxury; they will need to deal with real data sizes. Therefore it is important to move to real-world data or simulated data of a representative size early in a mobile application's design and development process.
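
One low-effort way to avoid this trap is to generate simulated data of a realistic size rather than hand-building tiny test files. The sketch below writes an XML file with however many records you ask for; the element names and fields are made up for illustration.

using System;
using System.IO;
using System.Text;

internal class TestDataGenerator
{
    //Writes a simple XML file containing 'recordCount' order records so
    //that development and testing can use realistically sized data.
    internal static void WriteSampleOrders(string fileName, int recordCount)
    {
        StreamWriter writer = new StreamWriter(fileName, false,
            Encoding.UTF8);
        try
        {
            writer.WriteLine("<orders>");
            for (int i = 0; i < recordCount; i++)
            {
                writer.WriteLine("  <order id=\"" + i.ToString() +
                    "\" customer=\"Customer " + (i % 100).ToString() +
                    "\" amount=\"" + ((i * 37) % 500).ToString() + "\" />");
            }
            writer.WriteLine("</orders>");
        }
        finally
        {
            writer.Close();
        }
    }
}

Calling, say, WriteSampleOrders with a count of 2,000 instead of 20 gives you a file of production-like size with almost no extra effort.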

Because there is a tendency to continue working with small-sized test data as long as possible, it is easy to forget to switch to real data until too late in the development process. Often real data is only used in the field tests of a near-complete application. By this time, all kinds of dependencies will have built up between code modules, and implicit capacity assumptions will have been made in the mobile application's design. It will be painful, if not impossible, to untangle these dependencies and change the application's underlying data and memory models to account for real-world data sizes. To ensure that you move to real-world representative data sizes, make it an exit criterion for the milestone in which the code dealing with the data is written: by the end of that milestone, you should have switched to using real data sizes in your daily development and testing. Acceptable performance with real-world data should be a criterion for successfully exiting every milestone.

Stress Test Your Application


Popular and useful applications have a way of growing beyond their original intended uses and capacities. People commonly use and abuse all kinds of equipment beyond stated tolerances. Your application should expect this to occur. It is advisable to do some simple stress testing to see how your application scales when the data it works with grows to the following sizes:

20% bigger file/data size than the design specified
This represents basic growing room for your application. Your application should be able to handle this with no problem.

50% bigger file/data size than the design specified
This represents plausible overuse of your application. Does the performance degrade gracefully or do dire effects occur?

100% bigger file/data size than the design specified
Major overuse.

200% bigger file/data size than the design specified
True stress testing.


Ideally your application should be able to gracefully manage these increased data sizes. If you have a good memory model that ensures only the proper amount of state is kept in memory, the user may not even notice any degradation. More realistically, your application will experience some kind of reduced performance when working with larger data sizes and the operative question becomes "What is the acceptable range?" It is important to understand in what range the application's performance is linear and at what point it will start to encounter exponential performance reductions.

If your application starts to fall over at any of these stress points, you should consider placing safeguards in your application against these conditions. For example, you could write checks in your code that explicitly disallow working with data sizes larger than can be handled with acceptable performance. Alternatively, your mobile application could warn users that loading larger amounts of data will cause significant performance deterioration and give them an estimate of the effects of their requests to work with larger sizes of data.
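
A safeguard of this kind can be as simple as checking the size of the data before loading it. The threshold values and warning text below are hypothetical; they would come from your own stress-test measurements.

using System;
using System.IO;
using System.Windows.Forms;

internal class DataSizeGuard
{
    //Hypothetical limits derived from the application's performance testing.
    private const long WARN_FILE_BYTES = 300 * 1024;   //Warn above ~300 KB
    private const long MAX_FILE_BYTES  = 600 * 1024;   //Refuse above ~600 KB

    //Returns true if it is okay to go ahead and load the file.
    internal static bool CheckFileSizeBeforeLoad(string fileName)
    {
        long fileLength = new FileInfo(fileName).Length;

        if (fileLength > MAX_FILE_BYTES)
        {
            MessageBox.Show(
                "This file is too large to open with acceptable performance.");
            return false;
        }
        if (fileLength > WARN_FILE_BYTES)
        {
            MessageBox.Show(
                "This file is larger than expected; loading it may be slow.");
        }
        return true;
    }
}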

Your design document should state both what the expected maximum data size is and what should happen when the user attempts to exceed capacity thresholds.

Never Put Off Performance Work (It Will Always Get Worse!)


I have stated this before and I will almost certainly state it again several times in this book: Do not put off performance work! Putting off performance work is like putting off fixing difficult bugs; it almost never pays off. It is easy to convince yourself to defer this work. Let me help out with a few common excuses I like to use:

I'm working towards code complete. After code complete, I'll have a better idea of how the whole application works and be able to tune it then.
Wrong. After you hit code complete, you will have a very difficult time reengineering parts of your application due to explicit or implicit assumptions you have made in your algorithms. The more code you write, the harder it is to change it. If you hit a performance problem that will affect the user's experience with your application, you should figure out how to fix it while the code is still malleable.

No need to worry about code efficiency now, I'll just use whatever coding techniques I'm most familiar with and take a few shortcuts. Later I'll find the slow algorithms and figure out how to rewrite them properly.
Wrong again. If you are writing needlessly wasteful code, you should fix this before you write your algorithms. This is particularly true for code that will perform many string operations or for algorithms that allocate objects in loops. It is often no harder to use efficient string-handling or object-allocation techniques (more on this later); it just requires a little more investigation up front to find out what the efficient techniques are for the programming system you are using. A short string-handling sketch follows this list. There are two important steps to writing code that performs well: (1) Choose the right algorithm that meets your needs. (2) Code the algorithm using the efficient techniques available to you. Writing needlessly poor-quality code with the idea of fixing it up later is like trying to rapidly paint a house by splashing paint on all the walls: You can get 80 percent of the house painted quickly, but you will end up painting it all again before you are done.

Note:
It is important to note that this does not mean you should spend endless hours handcrafting each and every algorithm; this is equally bad and can produce unmaintainable code that is needlessly micro-optimized. Some parts of your application matter a great deal more than others, and you need to assess and measure what is important and concentrate your special efforts there. What it does mean is that it is worth understanding what the efficient coding mechanisms are for working with strings, arrays, collections, object and type allocations, sorting algorithms, and common data types and operations, and getting into the habit of writing good-quality code (or at least not arbitrarily bad-quality code!). Always take the time to write good-quality code, and then measure the different parts of your application to find out which systems are most critical to ensuring great performance. After you have done this basic good design work, you can then spend your valuable time optimizing the most important parts further.

Performance is just a matter of tuning the algorithms I have already written.
Wrong a third time. Although good gains can sometimes be found by fixing individual algorithms, at other times fundamental redesign is required. Unless you write terrible-quality code, the greatest performance gains may come not from optimizing the algorithms you have written, but from restructuring your application's data model and user interface in fundamental ways that allow it to perform drastically better. It is important to know when to revisit your fundamental designs and look for creative alternatives.

Some performance problems may be based, not so much on how you process data, but rather on how much data you need to process at any given time. No heavy lifting will be required of your algorithms if you are working with smaller amounts of data or data that is already presorted or organized in the way that the application needs it. These kinds of fundamental optimizations can be at the very heart of your application's data model, state machines, and user-interaction model. After you have built a great deal of code on top of these models, you may be able to tune the code in the core algorithms to get some incremental gains, but you will have a difficult time making fundamental changes to the way your application relates to the data. Great application performance is systematic and based on an efficient overall design, not just isolated in individual processing algorithms.

Better performance is just not possible; I have reached the limit of what can be done with my application, and we'll just have to live with the slow performance.
Dead wrong. Better performance is always possible; it may just require a radical rethink. You may need to change how your application works. You may need to move code off the device and onto a server (or the other way around). You may need to precalculate things at design time, build optimized lookup tables, and place them into your code. You may need to split your single application into three smaller applications. There are almost endless ways to "cheat the clock" and raise your application's performance. Although it is true that there are often fastest-possible algorithms for accomplishing any single task, your application is almost never a single algorithm or task. Your application is a user experience. If you cannot find a way to get acceptable performance and are confident that the performance metrics you are using are the right ones, it is time to think more creatively. To quote an old truism, "Necessity is the mother of invention." Often the greatest insights are made when faced with these kinds of challenges, because they force you to think about the true nature of the problem you are trying to solve. Poor performance is generally due to a failure to imagine a creative solution that has not been thought of before. Stand on your head for a while and dwell on it, or do whatever it is you do when you need to think creatively. Something will pop up.
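
As promised above, here is the kind of string-handling difference that is cheap to get right up front. Both methods build the same text, but the first allocates a brand-new string on every pass through the loop, while the second appends into one reusable StringBuilder buffer.

using System;
using System.Text;

internal class StringBuildingExamples
{
    //Wasteful: each "+=" allocates a new string and copies the old
    //contents, so the work grows with the square of the item count.
    internal static string BuildListWasteful(string[] items)
    {
        string result = "";
        for (int i = 0; i < items.Length; i++)
        {
            result += items[i] + ", ";
        }
        return result;
    }

    //Better: StringBuilder appends into one growing buffer and produces
    //a single string at the end.
    internal static string BuildListEfficient(string[] items)
    {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < items.Length; i++)
        {
            builder.Append(items[i]);
            builder.Append(", ");
        }
        return builder.ToString();
    }
}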


Tackling performance problems as they arise enables you to confidently move forward to the next stage of your application's development. If something goes drastically wrong with performance down the road, you can backtrack to a known point where the performance was good and work from there. Fixing performance problems along the way is a lot like clipping into a safety line when you are climbing and reach a new point in your progression upward; you can only fall so far past the point you last clipped in. If you do not take the time to consolidate your gains, you can fall all the way to the bottom and that can hurt a lot. Do not go climbing recklessly ahead with your application's development without stabilizing your application's performance.

