Writing Mobile Code: Essential Software Engineering for Building Mobile Applications

Ivo Salmre

Think Devices!


Next to your mobile application's performance, the application's user interface will most determine the quality of experience users have with your application. Having an intuitive, responsive, reliable, and good-looking user interface makes a big difference. As with other creative endeavors, designing a good mobile application user interface requires both creativity and discipline.

Creativity is required to find novel solutions that enable you to present your application's functionality on the relatively small canvas offered by mobile devices. Desktop displays are becoming enormous in both physical dimensions and effective resolution. Mobile devices are bounded by their environment; they must be easily carried around by users and used unobtrusively so that others in the surrounding environment are not disturbed. Conveying the information and interactivity you want to deliver is an exercise in both distillation and creative organization of information. In addition to the challenges of fitting the needed information onto a mobile device screen, there are human and social factors to keep in mind. Think of how frequently today a mobile phone's ringing interrupts the flow of a conversation or meeting, or disturbs others in public areas. Think of what would happen if everyone in a room tried to talk on their mobile phones at once; the result would be an incomprehensible cacophony. (Sadly, this is no longer a theoretical problem!) Think of what would happen if everyone in a crowded elevator or subway needed both hands to access their schedule, business, or personal information stored on a mobile phone; elbows would be everywhere and tempers would flare. It is easy to work with one hand in a crowded space, much harder to work with two. Both the information that needs to be conveyed as well as the context in which the application is being used should have a significant impact on how the user interface is designed. These are problems that need to be solved creatively.

Discipline is required to maintain consistency in user interface design. One of the hallmarks of a poorly designed mobile device user interface is inconsistency of experience through the flow of the application. Forcing users to change focus to different parts of the screen, or to press different physical buttons, just to travel in a straight line through the application's user interface is bewildering. As with many other aspects of mobile device design, this problem is not unique to mobile devices, but it is exacerbated by the fact that devices offer a constrained and concentrated view to users; this makes awkward navigation harder to work around for people using the application. Keeping a disciplined approach that ensures consistent and simple navigation through your application is important.

Discipline is also required in the realization of the user interface code to ensure that the application's code stays flexible and open to experimentation and refinement. User interface code has a way of becoming messy and tangled. Successive small tactical changes piled upon one another, each solving an immediate problem, have a way of making the code base fragile and resistant to change. Discipline is required to keep the user interface code from degenerating into a tangle of intricate and interlocking subsystems. Instead, the goal should be to define and realize the user interface as a robust set of discrete states. The implementation of each user interface state should be insulated enough from the other states to allow iteration on its design without destabilizing them. As recommended earlier in this book, adherence to a state machine approach when defining and implementing a mobile device's user interface pays rich dividends when experimenting with, refining, and maintaining user interface code. A disciplined coding approach does not decrease flexibility; it increases it.
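To make the state machine approach concrete, here is a minimal sketch in which every screen change is routed through a single transition method, keeping each state's implementation insulated from the others. The enum values, class, and method names are hypothetical and invented for this illustration.

// A minimal sketch of a state machine for user interface flow.
// The state names and class name are hypothetical examples.
public enum AppUIState { MainMenu, EnterData, ReviewData, Results }

public class UIStateMachine
{
    private AppUIState currentState = AppUIState.MainMenu;

    // Every state transition goes through a single well-known method,
    // so no screen's code reaches directly into another screen's code.
    public void GoToState(AppUIState newState)
    {
        HideCurrentState();          // tear down or hide the old state's controls
        currentState = newState;
        ShowState(currentState);     // build or show the new state's controls
    }

    private void HideCurrentState() { /* hide or dispose the current screen's controls */ }

    private void ShowState(AppUIState state)
    {
        switch (state)
        {
            case AppUIState.MainMenu:   /* show main menu controls */   break;
            case AppUIState.EnterData:  /* show data entry controls */  break;
            case AppUIState.ReviewData: /* show review controls */      break;
            case AppUIState.Results:    /* show results controls */     break;
        }
    }
}

Because each case in ShowState knows only how to present its own screen, one state's layout can be redesigned and re-tested without touching, or destabilizing, the code for any other state.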

Mobile device user interfaces are fun. They can be challenging and they require new ways of thinking and problem solving. For developers coming from the design perspective of desktop- or Web browser-based applications, mobile device user interfaces offer unique challenges that must be mastered. As the mobile application's designer, you must get your head into the mindset of mobile device thinking.

One Size Does Not Fit All


Only a few years ago, the concept of "write once, run anywhere" was all the rage; we were all going to write applications that ran seamlessly on different desktop operating systems, transferred down to our cell phones, and ran on our wristwatches without modification. Today this concept seems about as useful as a "universal shoe": substandard in all cases, if achievable at all.

Think about how ridiculous it would be to have one universal shoe that was meant to be worn on feet of any size and used for all purposes. The same shoe for the foot of a child aged six as for a grown adult and for all activities ranging from ice climbing to ballroom dancing. I have tried to dance in ski boots and can attest to the fact that it does not quite work; whether I can dance even given suitable footwear is still a matter of some debate and a question beyond the scope of this book. In any case, the concept of the universality of a user interface is broken for two reasons: (1) Devices come in different sizes and shapes, and (2) different device classes are used for different optimized purposes.

The fact that devices come in different sizes and form factors has a great practical impact on the utility of any given user interface on a particular device. For example, a smart phone user interface is typically significantly narrower than a Pocket PC's. Both of these are vastly narrower than a tablet computer form factor. These differences are not arbitrary, they have a lot to do with how the devices are meant to be carried (for example, pants pocket, jacket pocket, backpack, or briefcase) and under what circumstances they are intended to be used. The dimensions of the device's screen will have a significant influence on the information layout you choose. Importantly, input mechanisms differ from device to device as well. Devices such as smart phones have an extended 12-key telephone keypad and no concept of a screen pointer, whereas PDA devices tend to have touch screens as their primary input mechanism. Some devices use a touch screen for input, whereas other devices forego a touch screen in favor of a rugged read-only display that is more durable and will not break when placed in your backpack with a set of keys and crammed under an airplane seat for takeoff. Still other devices have a full keyboard and a stylus to allow work while standing or sitting at a desk. The variety of different sizes and input mechanisms is large, and it is virtually impossible to come up with a generic user interface model that works well for all of them. Runtime models that attempt to dynamically adapt a user interface based on the abilities of the target device generally result in a low quality of experience because they cannot discern what the most important aspects of the application user interface are and how best to express these richly on the target devices they are run on.

Each class of device has an optimized set of purposes and usage models it was designed for. Mobile phones are used primarily for making phone calls, viewing previously entered information, and entering small amounts of information, usually a sentence or two of text or simple numeric input. Pocket PC-type interfaces can offer a much larger display capability for exploring information, but because most do not come with a built-in keypad, they are not suitable for entering free-form text. A Pocket PC Phone may be reasonably used for making phone calls, but if its user is using it only as a phone, he or she has probably chosen the wrong device. Tablet computer form-factor devices may often be used while standing because they allow free-form entry of information with a stylus, but typing is difficult in this position. Laptops are well suited for typing in or exploring information but not for making quick phone calls or instantaneously retrieving information. This is because of their longer boot-up time as well as because of the way they are carried; you cannot take a full laptop out of your breast pocket, turn it on, and dial a number in eight seconds; nor can a laptop stay on indefinitely to receive phone calls, due to battery-life considerations. It is important to match your application's user interface to the overall gestalt of the device it is running on.

Although universal application portability is not possible in a useful sense, reuse and synergy are possible between device classes. Often core application logic can be shared between different implementations that are customized for target mobile devices. It may make sense to abstract common application logic into binary components that are shared by different device-specific implementations or it may be more efficient just to reuse the same source code in the different device projects; the choice is yours. In contrast to core application logic, user interface code is another matter. To have a rich device experience, you should plan on having a customized user interface for each device class you intend your application to run on. Plan on having customized implementations tuned to the physical strengths and constraints of the devices being targeted, tuned to the usage models for those devices and conforming to the navigation metaphors offered by each of the specific device models. There is no such thing as a one-size-fits-all "universal shoe," and software is more complex than footwear. Plan to specialize!
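As a hypothetical illustration of this kind of sharing (the class and its calculation are invented for this sketch), core application logic can be written with no dependency on any user interface types and then referenced by, or compiled into, each device-specific project; only the presentation layer is rewritten per device class.

// Hypothetical sketch: core application logic kept free of any UI types
// so it can be shared by both a smart phone and a Pocket PC project.
public class OrderCalculator
{
    // Pure logic: no references to forms, controls, or device-specific APIs.
    public decimal ComputeTotal(decimal unitPrice, int quantity, decimal taxRate)
    {
        decimal subtotal = unitPrice * quantity;
        return subtotal + (subtotal * taxRate);
    }
}

// Each device project then wraps this logic in its own user interface:
// the smart phone version drives it from a list-based screen flow, while
// the Pocket PC version drives it from a tabbed, stylus-oriented form.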

One Hand or Two?


An important characteristic for your mobile application is determining whether it is intended to be operated using one hand or two hands. Usually this choice is coupled with the choice of mobile device hardware for your application.

For example, if the application is intended to run on a smart phone, your application will have to keep one-handed operation in mind as a specific design and testing goal. The flip side of this decision is that if your application's usage scenarios require one-handed operation, you should choose a device centered on a one-handed usage paradigm. Single-handed operation means input of information and application navigation using the same hand that holds the device.

Minimizing the number of times users will need to switch buttons as they navigate through the application is important for successful one-handed operation. For example, if your application presents the user with a five-step process, it should be possible to navigate this user interface by pressing a single physical button five times if the user desires the default values. Having to switch buttons requires the user to switch his or her visual attention from the screen to the physical buttons on the phone. It is remarkably distracting, breaks the user's stream of thought, and increases input error. Picking the right default values so all the user has to do is affirm them is also important to increasing usability. Reducing the number of button clicks required to accomplish a common task to an absolute minimum is also important; it reduces the possibility of error and reduces the amount of time users spend accomplishing the short tasks they typically perform with mobile devices. When the overall session time with a mobile application is around 20 seconds, shaving off a few seconds by having a lean and efficient navigation model makes a big difference.

Good single-handed usage design requires paying close attention to the navigation metaphors present on the target mobile phone. For example, the tab-dialog metaphor is usually not used for single-handed application navigation because there is not enough space to display all of the tabs on most single-handed devices. There is also no good way to navigate different tabs on a single-handed, nontouch screen display; to use tab controls a device is typically held with one hand and its screen is tapped with a stylus in the other hand. Instead of using tab controls for navigation, smart phone user interfaces that are intended for single-handed navigation between multiple screens often display choices as a series of lists and have a button that navigates the application back to the previous screen. Forward navigation works by pressing number keys representing numbered list choices, and backward navigation works by pressing the back key. Users will become confused and frustrated if the device-specific navigation metaphors do not work as expected in your application.
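The following is a minimal sketch of this navigation pattern, assuming a .NET Compact Framework-style form; the list box, the helper methods, and the use of Keys.Back for the hardware Back key are illustrative assumptions rather than a prescription.

using System;
using System.Windows.Forms;

// Hypothetical sketch: number keys select list items, and the Back key
// returns to the previous screen. The navigation helpers are placeholders.
public class MenuForm : Form
{
    private ListBox menuListBox = new ListBox();

    public MenuForm()
    {
        Controls.Add(menuListBox);
    }

    protected override void OnKeyPress(KeyPressEventArgs e)
    {
        if (e.KeyChar >= '1' && e.KeyChar <= '9')
        {
            int choice = e.KeyChar - '1';          // '1' selects the first item
            if (choice < menuListBox.Items.Count)
            {
                NavigateForwardTo(choice);         // push the chosen screen
                e.Handled = true;
            }
        }
        base.OnKeyPress(e);
    }

    protected override void OnKeyDown(KeyEventArgs e)
    {
        if (e.KeyCode == Keys.Back)                // hardware Back key (an assumption)
        {
            NavigateBack();                        // pop back to the previous screen
            e.Handled = true;
        }
        base.OnKeyDown(e);
    }

    private void NavigateForwardTo(int choice) { /* show the screen for this choice */ }
    private void NavigateBack() { /* return to the previous screen */ }
}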

Another significant factor in application navigation is how users view the device as a whole. When using a smart phone, users tend to view the whole device as "one application" more than they do on Pocket PCs or Tablet PCs. On smart phones, there is much less of a concept of starting applications or switching to applications; instead the user perceives only navigating to different screens on the device. This blurring of application boundaries makes adhering to common navigation models even more important. As a general rule, the smaller the device, the more the user will think of it as a single application and expect consistent navigation through the entire device.

In contrast to the smart phone, a Pocket PC is designed for two-handed operation. One hand holds the device, and the other is used to navigate and make decisions. If your application is intended to run on a Pocket PC type of device with a touch screen and stylus for input, you will want to design for optimal usage of the form factor's input and output mechanisms. As noted previously, the choice of device may be dictated by the user experience you need to enable with your application.

When working with touch-screen-based devices, the layout of user interface elements requires careful consideration. It is important to ensure that users working with stylus in hand do not obscure important parts of the screen as their hand hovers over the device to make selections; this problem does not exist on smart phones because one-handed operation ensures that the screen is always in view. In contrast to smart phone user interfaces, the tab control metaphor is often a very good user interface model for Pocket PC applications because the screen has ample space to display tabs for navigation and the touch screen allows for quick navigation between the tabs.

You should decide whether your application is one handed or two handed. Sometimes the choice of hardware is predetermined; other times the choice is part of the software design. Regardless, after a target device is chosen, your choice of single- or two-handed operation is also chosen. It is important to make this decision explicit and to enforce it in your user interface design and testing process.

Smaller Screen Real Estate and the Increased Importance of Navigation


As noted earlier in this book, mobile device applications are used frequently but in short spurts; contrast this with desktop application session times, which tend to be much longer. Because the session length for mobile applications tends to be short, users need to be able to navigate more quickly to the information they want to access. Generally speaking, the smaller the device, the shorter the session times and the higher the requirement for quick navigation.

Applications running on small screens require navigation to reach information that a large screen can display at a glance. Table 13.1 contrasts a desktop display, a Pocket PC-sized display, and a smart phone display. Most desktop displays today easily exceed 1024 x 768 pixels, offering a large amount of screen real estate to show information to users.

Table 13.1. Relative Screen Areas of Different Devices

Device Type        Typical Resolution    Number of Pixels    Relative Size
Desktop/laptop     1024 x 768            786,432             100%
Pocket PC          240 x 320             76,800              9.77%
Smart phone        176 x 220             38,720              4.92%

A typical Pocket PC display has less than 10 percent of the screen area of a low- to mid-resolution desktop or laptop display. A smart phone has about 5 percent. This is not as dire as it seems, for three reasons:

Desktop displays tend to use more buttons, bigger pictures, and more screen real estate to convey information than their mobile device brethren. There tend to be more controls on any given information screen, and groups of controls tend to be spaced farther apart from one another.

Devices offer a more concentrated experience and forgo many of the general-purpose features present in desktop applications in favor of focusing in on what users want in mobile scenarios.

In reality, a human being can only concentrate on a small part of a large screen at once. This means that the amount of information a person is usually working with at any given time is relatively small. It only appears that we see the whole screen fully at any given moment; in reality, our eyes are constantly glancing around to take in different parts as needed. It does mean, however, that navigation between screens of information will be a more common task on mobile applications.

A useful metaphor is to think of your application as a work of writing and to consider how this would be reflected on different devices:

A desktop application screen is capable of expressing several related paragraphs of information. Each paragraph explores a different idea and is represented on a part of the screen with its user interface controls being analogous to the sentences within a paragraph. Although users cannot take in all of the different paragraphs of information simultaneously, they can easily and often subconsciously switch their visual attention from one "paragraph" to another.

By analogy, a Pocket PC application screen is capable of conveying a single paragraph of information at any given moment. The paragraph is broken into six to eight sentences, each being analogous to controls laid out on the screen. Navigation between different paragraphs is accomplished by tabs at the bottom of the screen. When designing a Pocket PC-sized user interface, it is important to divide your functionality into logical paragraphs, understand which paragraphs are most essential and should be presented first, and understand how the user will navigate between the paragraphs using the tab control. Ideally, the user can, at a glance, see the detailed contents of one paragraph of information and the outline of what all the other paragraphs contain. The user must make a conscious decision to switch between different "paragraphs" of information, usually by selecting a TabControl tab on the bottom of the device's touch screen.

A smart phone application's screen is analogous to conveying several sentences of information at any given time, perhaps one to four sentences. This is enough information to express a short paragraph, but often it is necessary to break up a larger "paragraph-sized" concept into two related screens of information. The navigation metaphor is either one dimensional, moving forward or backward through the application's screens, or explicitly list based. The user interface switches between one of two modes:

Details mode
The user can see a short "paragraph's" worth of information that allows navigation forward or backward to the adjoining paragraphs. This is a typical application screen on a smart phone.

Outline mode
The user is presented with an outline list of paragraphs to choose from that can be navigated to. This is a list navigation screen on a smart phone.

The smart phone user interface size does not support displaying both the outline and the detail at the same time. When designing a smart-phone-sized user interface, it is important to think hard about the individual sentence-level concepts you want to convey, which sentences need to be on the screen at the same time, which sentences are most important and should be listed first, how to navigate between the different screens, and when to offer outline views.


Lists or Tabs?


Lists and TabViews represent two common user interface navigation metaphors for mobile devices, each with its own strengths.

Smart phones use lists to present multiple simultaneous navigation choices. By definition, a list is a one-dimensional series of choices. Lists are a good user interface metaphor for devices with relatively small and narrow screens, particularly if the devices have numeric keypads that can map physical numbered buttons to choices on the screen. A user can navigate lists relatively quickly by viewing the options on the screen and pressing buttons, and users quickly memorize the key combinations for navigating a shallow series of lists that represent common tasks. This kind of navigation works a lot like a voice menu on a telephone, but it is faster because it is visual rather than auditory; users keep pressing keys to navigate menus until they end up where they want to be. It is important to keep the lists predictable and relatively shallow.

In contrast to the smart phone's one-dimensional model, where the user navigates menus to get to interesting screens, Pocket PC-type devices have enough screen real estate to display both a page of interesting user interface and navigation options at the same time. It is for this reason that Pocket PC applications often use tab controls to display an application's user interface. When using tab controls, you are effectively reusing the same screen real estate for each tab. This can lead to many controls and event handlers inside a single class with confusing and interrelated code. A useful way of managing this complexity is to create a new class file for each tab on the tab control and have any event handlers for controls on that tab call into the class to do the required processing; this creates a helpful encapsulation that keeps the tabs insulated from one another, as the sketch following Figure 13.1 illustrates. Figure 13.1 shows a tab control interface for navigating the functionality in a Pocket PC scientific calculator; each tab offers a related chunk of functionality, and users can navigate between different tabs easily as their needs dictate.

Figure 13.1. Tab controls allow a great deal of functionality to be exposed in logical chunks.
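Here is a minimal sketch of the one-class-per-tab encapsulation, using invented calculator names for illustration; the form's event handler stays thin and simply forwards the work to the class that owns the Trig tab's logic.

using System;
using System.Windows.Forms;

// Hypothetical sketch of the "one class per tab" encapsulation described above.
public class TrigTabLogic
{
    // All of the Trig tab's processing lives here, away from the form's plumbing.
    public string ComputeSine(string angleText)
    {
        double angle = double.Parse(angleText);   // input validation omitted for brevity
        return Math.Sin(angle).ToString();
    }
}

public class CalculatorForm : Form
{
    private TabControl tabControl = new TabControl();
    private TabPage trigTab = new TabPage("Trig");
    private TextBox angleTextBox = new TextBox();
    private Button sineButton = new Button();
    private TrigTabLogic trigLogic = new TrigTabLogic();   // logic for the Trig tab only

    public CalculatorForm()
    {
        sineButton.Text = "sin";
        sineButton.Click += new EventHandler(sineButton_Click);
        trigTab.Controls.Add(angleTextBox);
        trigTab.Controls.Add(sineButton);
        tabControl.TabPages.Add(trigTab);
        Controls.Add(tabControl);
    }

    // The handler stays thin: it forwards the work to the tab's own class.
    private void sineButton_Click(object sender, EventArgs e)
    {
        angleTextBox.Text = trigLogic.ComputeSine(angleTextBox.Text);
    }
}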


Mobile Phone User Interfaces and the Importance of Consistent Click-Through


When working with mobile-phone-sized user interfaces, it is important to ensure that navigation does not require the user to switch buttons to accomplish common tasks. It is extremely distracting to be navigating a user interface with a button on the upper-left side of the phone and abruptly need to find and press a button on the upper-right side to continue the natural flow of the application; this is particularly true when the device is being held and operated in a single hand. This simple concept of one-button navigation cannot be overemphasized because it is so often disregarded, resulting in a needlessly awkward and frustrating user experience. Smart-phone-type user interfaces often follow a straight-line, one-dimensional navigation model, with one main button meaning "Okay, go forward" and another meaning "No, go backward." Pay careful attention to the specifics of the navigation model on the mobile phone your application is targeting; design and test aggressively to ensure that your application follows this model.

Touch Screens and the Importance of Big Buttons


A common mistake in building touch screen user interfaces is making controls too small. The practical consequence is that users have a hard time pressing the right buttons or entering accurate data when using the mobile device application. This causes a great deal of frustration because users feel they are not coordinated enough to use your user interface. Do not make your users feel clumsy! Several factors contribute to the problem of undersized user interface controls:

Design and testing using a software emulator
When running in a desktop-based emulator, it is easy to see where the mouse pointer is and exactly where clicking will occur on the device's screen. It is significantly easier to accurately operate small user interface controls using a mouse on an emulator than it is using a stylus on a physical device.

Display and touch screen parallax and calibration inaccuracies
The touch screen's surface usually sits some small but not insignificant distance above the physical display elements. Depending on the angle the device is held at by users and the angle of the stylus they are holding, parallax inaccuracies can occur between where users think they are clicking and where the click is registered. Different devices exhibit this problem to different degrees, and ruggedized devices that go to extra lengths to protect their screens will tend to have an even more pronounced parallax effect. Inaccuracies can also be introduced if the touch screen has not been calibrated accurately enough. This means that the stylus is less accurate than you may think.

Real-world usage
It may be possible to click a small button accurately in your office, but what if you were on a bus, in a taxi, walking down the street, or sitting in a train? Using mobile devices in the real world subjects the user to all kinds of distractions, vibrations, and short but sudden accelerations that make working with small controls on a touch screen more difficult.

A desire to "fit it all on one screen"
Trying to fit too much information onto a single screen means creating a crowded environment and shrinking controls. The higher the density of controls, the greater the probability they will be inaccurately selected.

Reliance on built-in software input mechanisms
Devices such as the Pocket PC offer a pop-up software keyboard that the user can type into. This is a useful feature, but it is a general-purpose feature and because of this it has a lot of keys it needs to display in a small space. If you have more specific needs, you should optimize the user interface's input mechanisms.


Figure 13.2 shows the same application using the software keyboard (a.k.a. SIP, or software input panel) and a custom set of buttons for common inputs. Although the software keyboard offers a fairly good general-purpose mechanism for typing in letters, numbers, and symbols, a dedicated set of larger user interface controls designed specifically for scientific calculator usage offers a more accurate and easier-to-use interface. The generic software keyboard is valuable for limited general-purpose input, but it can and should be improved upon for more task-specific input. When designing a touch screen user interface, you should make the controls as big and as purpose-dedicated as practically possible; a minimal sketch of this approach follows Figure 13.2.

Figure 13.2. Comparing different kinds of user interface input mechanisms.
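As a rough sketch of the custom-button approach, and assuming the .NET Compact Framework's InputPanel class is what controls the SIP, the general-purpose keyboard can be kept out of the way while a few large, task-specific buttons insert the input the screen actually needs; the control names and layout here are invented.

using System;
using System.Windows.Forms;
using Microsoft.WindowsCE.Forms;   // InputPanel (the SIP) on the .NET Compact Framework

// Hypothetical sketch: hide the general-purpose software keyboard and
// provide large, task-specific buttons for the input this screen needs.
public class EntryForm : Form
{
    private InputPanel sip = new InputPanel();
    private TextBox formulaTextBox = new TextBox();
    private Button sinButton = new Button();

    public EntryForm()
    {
        sip.Enabled = false;                       // keep the SIP out of the way
        sinButton.Text = "sin(";
        sinButton.Bounds = new System.Drawing.Rectangle(4, 200, 72, 40);  // big tap target
        sinButton.Click += new EventHandler(sinButton_Click);
        Controls.Add(formulaTextBox);
        Controls.Add(sinButton);
    }

    private void sinButton_Click(object sender, EventArgs e)
    {
        formulaTextBox.Text += "sin(";             // one tap instead of four SIP taps
    }
}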


Optimize for Common Data Entry


Whenever possible, it is a good idea to help the user fill in common input quickly and accurately. Because mobile device users rarely have a full-sized keyboard with which to input data, extra effort is warranted in helping them. A great example of this is a date picker. One way to input a date is to require the user to manually type it in character by character (for example, "January 6, 2006" or "6/1/2006"); this is time-consuming and prone to input errors, as well as to internationalization issues due to varying date formats. A better way is to give the user list boxes for day, month, and year, saving them the need to type the data in. Better still is a pop-up control or dialog that displays a calendar from which they can quickly pick a date to fill in the field. (Note: Given the common need to enter dates, it is likely that future versions of the .NET Compact Framework will add a calendar control, but the general problem of rapid input of complex data remains.)
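A minimal sketch of the list box approach might look like the following; the form, ranges, and control names are invented for illustration, and a calendar-style pop-up would be better still where one is available.

using System;
using System.Windows.Forms;

// Hypothetical sketch: day/month/year drop-down lists instead of free-form typing.
public class DateEntryForm : Form
{
    private ComboBox dayBox = new ComboBox();
    private ComboBox monthBox = new ComboBox();
    private ComboBox yearBox = new ComboBox();

    public DateEntryForm()
    {
        for (int d = 1; d <= 31; d++) dayBox.Items.Add(d);
        for (int m = 1; m <= 12; m++) monthBox.Items.Add(m);
        for (int y = 2005; y <= 2015; y++) yearBox.Items.Add(y);   // illustrative range
        Controls.Add(dayBox);
        Controls.Add(monthBox);
        Controls.Add(yearBox);
    }

    // Build the chosen date; invalid combinations (e.g. 31 February) surface here,
    // and a selection is assumed to have been made in each list.
    public DateTime GetSelectedDate()
    {
        return new DateTime((int)yearBox.SelectedItem,
                            (int)monthBox.SelectedItem,
                            (int)dayBox.SelectedItem);
    }
}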

A mistake that is sometimes made when designing user interfaces for mobile devices is to try to save screen and program space by using TextBox controls for complex data input; this forces the user to manually type in the complex data such as dates. This kind of "efficiency" is a Pyrrhic victory at best; screen real estate and program size used for saving the user time and increasing accuracy is space and effort well spent.

Figure 13.2 demonstrates this concept by providing both buttons and drop-down list boxes to help the user enter data into a scientific calculator. Instead of requiring the user to hunt and peck the letters "sin(" for a trigonometric function, he or she can simply select the function from a drop-down list box. Common variable names x, y, and t are represented as buttons on the form, along with other common mathematical symbols. Complex mathematical formula input with this interface is much faster than manual entry via a generic onscreen keyboard. Further optimizations to this user interface are doubtless possible.

Ensure That Redundant Manual Input Exists for Automated Input Mechanisms


Special-purpose mobile applications often use custom hardware to accelerate data input. A good example of this is a bar code scanner plugged into a mobile device to allow the quick entry of bar code data attached to physical objects. If a mobile application has to interact with the physical world, a bar code scanner or even speech-recognition support is a potential way to raise the usability of the application and the productivity of its user. When possible and valuable, these kinds of real-world input mechanisms should be explored and used. There is, however, a danger in relying on them exclusively. Physical bar code labels or readers may become dirty or damaged, and speech recognition may have to deal with noisy environments and day-to-day voice irregularities that diminish its accuracy. For these reasons, it is important to always have a manual fallback that allows the application's input to be keyed in by hand when more automated mechanisms fail. For the same reason that supermarket checkout counters allow the cashier to manually enter the bar code number of an item if the scanner repeatedly fails, your state-of-the-art mobile application should have a dedicated user interface that allows fast entry of manual data when automated input mechanisms fail. An application that works great 90 percent of the time and is worthless the other 10 percent is not a good or reliable mobile application; an application that works great 90 percent of the time and has a decent manual workaround for the remaining 10 percent is.
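The sketch below illustrates this dual-path idea; the scanner callback is an invented placeholder rather than a real SDK, but the point is that the automated and the manual input paths both converge on the same processing routine.

using System;
using System.Windows.Forms;

// Hypothetical sketch: automated and manual input feed the same code path.
public class ItemLookupForm : Form
{
    private TextBox manualEntryBox = new TextBox();   // always-available manual fallback
    private Button lookupButton = new Button();

    public ItemLookupForm()
    {
        lookupButton.Text = "Look up";
        lookupButton.Click += new EventHandler(lookupButton_Click);
        Controls.Add(manualEntryBox);
        Controls.Add(lookupButton);
    }

    // Called by the (hypothetical) scanner driver when a bar code is read.
    public void OnBarcodeScanned(string barcode)
    {
        ProcessItemCode(barcode);
    }

    // Manual path: the user keys in the number printed under the bar code.
    private void lookupButton_Click(object sender, EventArgs e)
    {
        ProcessItemCode(manualEntryBox.Text);
    }

    private void ProcessItemCode(string code)
    {
        // Both input mechanisms converge here, so the rest of the
        // application does not care how the code was captured.
    }
}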

Emulator and Physical Device Testing


Software emulators are wonderful things. They enable you to quickly design, test, and debug your application without needing to worry about setting up physical devices, switching your attention from the computer to the device, or many of the other hassles that working with an additional piece of hardware implies. Software emulators are even great for demonstrating your application; their images can easily be projected onto a large display, and it is possible to travel with many different emulator images stored on your laptop, saving you from carrying a tangle of wires and a suitcase full of electronics. (Airport security loves this.) However, what emulators are not good for is testing the performance of your application or testing the usability of your mobile device application's user interface. For this you need to test your application on physical device hardware. Despite your best efforts it is impossible to faithfully test the usability of your application using an emulator. Here are a few reasons why this is so:

Emulators are not physically held.
Most mobile devices are operated by holding them in your hand and either operating them with your thumb (one handed) or with your other hand. This simply cannot be done with an image on a screen.

The desktop/laptop mouse and keyboard allow you to cheat.
Typing letters into a text box using a keyboard is not the same as using a 12-key telephone keypad to enter data. Clicking with a mouse is not the same as pressing on a touch screen.

The size of your hand is not represented on a computer screen.
User interface layout cannot be properly gauged when using an emulator because the mouse cursor is small and does not obscure the screen when you hover over a button. The mouse cursor is not physically connected to anything. In contrast, a touch screen stylus is large and is physically connected to your even larger hand; it does obscure a large portion of the screen when you reach over the screen to press a button.

Desktop and laptop pointing is more accurate than device pointing.
Laptop screens offer a flat surface that displays the pointer on the same screen, enabling you to see exactly where you are about to press. A mobile device with a touch screen does not display a mouse pointer. Where the "click" occurs when you tap on the screen will be approximately where the user thinks it should be, but it will be affected by such things as parallax (which depends on the user's angle to the screen), the physical separation between the touch screen and the display elements, and the calibration of the touch screen. In practice this means there is a lower limit on how small a user interface element can be before users start missing it when they tap on the screen.

An emulator can easily be reset and is not used for other purposes in between testing your application.
An emulated smart phone is not the same phone you are taking phone calls on and keeping your appointment calendar on. The fact that your physical device is often not single purpose but instead has other functions that the user will rely on it for is important. You will need to make sure that your application behaves well when the device is being run 24 hours a day and 7 days a week as well as understand what the effects of other device applications are on your application. This kind of real-world usage cannot be accurately simulated.


Keeping the points listed above in mind when designing your mobile device application is important but is not a substitute for testing on actual hardware. Fundamentally, the only way to really test the user interface of your mobile application is to test it on the hardware on which it will be running.

Figure 13.3 shows an example of how user interface usability can differ on an emulator versus on a physical device. The example shown is a foreign-language vocabulary teaching game where an animated character moves around the screen based on a user's correct or incorrect answers to multiple-choice questions. When using a device emulator to design and test the application, all appears well; the choices are laid out neatly on the screen and navigation is simple. Testing on a physical device, however, reveals some significant usability issues. As a user moves his or her hand over the screen to select an item, it obscures the question as well as the game's play field. This means the question cannot be viewed while the user is selecting between various answer options. In addition, it means that a user must move his or her hand away from the screen to see the results of the choice they have made; if they do not move their hand quickly enough, they will miss the game play that occurs on the screen and thus some of the enjoyment of the application. Clearly this is not an optimal situation, but this only becomes obvious when testing on a physical device, because on an emulator screen running on a PC the screen is not blocked by the user's physical hand.

Figure 13.3. Contrasting user interface usability on an emulator vs. physical device.

[View full size image]

Figure 13.4 shows two possibilities for improvements to our handheld application's user interface. Both possibilities give users a better view of the play field as they make selections while holding the device in their hand. At first glance, the screen on the right seems superior because it allows the viewing of the question while selecting from the options; in real-world usage, however, this may prove not to be the most important factor. Issues such as where users rest their hand between choices and the overall physical balance of the device while held in a user's hand are important factors that will determine the optimum design. In this case, after a fair amount of testing I selected the view on the left for just these reasons.

Figure 13.4. More usable alternative screen layouts based on physical device testing.

