Writing Mobile Code: Essential Software Engineering for Building Mobile Applications
By Ivo Salmre
Publisher: Addison Wesley Professional
Pub Date: February 01, 2005
ISBN: 0-321-26931-4
Pages: 792
Multitasking and Multithreading in Modern Operating Systems
Today's modern multitasking operating systems allow a microprocessor to be used as a shared resource. The microprocessor's time gets split between different tasks, all of which get to pretend that they are the sole owner of this resource. This is known as multitasking, and the tasks being performed are known as processes. There are probably several tasks running concurrently on your mobile device at any given time; this number is probably higher than you would guess. Some of these tasks serve low-level needs that you would not consider "applications," and some are running as familiar application software. The operating system occasionally interrupts a task in the middle of what it is doing and passes control over to another waiting process or thread. This works well because most of the time applications are not doing much; they are usually waiting for external input to handle. In contrast, if each application process used all of its allotted processing time to calculate ever more digits of Pi, overall system performance would suffer greatly. Multitasking works because the microprocessor is an underutilized resource most of the time.

The subject of how to divide time fairly between different processes and threads is a deeply specialized topic that would fill its own book. Suffice it to say that processor time is divided reasonably fairly between the different tasks that vie for it.

Switching control between different processes involves what is commonly referred to as a context switch. Because each application process gets to pretend that it is the sole owner of the microprocessor, all kinds of data need to be swapped in and out when execution control is handed from one process to another.
Microprocessor registers need to have their values swapped out, the program counter needs to be swapped out, virtual memory pointers need to be moved around, and if there is a pipeline of queued instructions or a memory cache associated with the microprocessor, these too need to be dealt with. Context switches are not cheap. The more processes your device has, or the smaller the time slice allotted to each process when it is its turn to run, the larger the percentage of total time spent switching from one process to another.

Because operating system process context switches are so expensive, most modern operating systems support multithreading as a more lightweight form of multitasking. Multithreading enables you to support multiple threads of execution inside a single process. Switching execution context between different threads is generally less expensive than switching process contexts, but switching thread contexts is not free either. Some overhead is required to change the execution address, swap out register values, and perform other necessary bookkeeping to keep things running smoothly.

Allowing multiple streams of execution inside the same application's memory space greatly raises the potential complexity of the application's code by throwing out time determinacy of execution. If two threads try to access the same areas of memory at roughly the same time, unintuitive and complex situations can arise. This is true with native C/C++ code and equally true when working with managed code. To deal with this, the concepts of locks, mutexes, semaphores, and critical sections exist; these enable you to create sections of code that are not re-entrant. This is similar to a multilane highway having a section where traffic merges down into a single lane.
Again, a fairly large book could be devoted to detailing the intricacies and pitfalls of parallel code execution.

In the end, all the different processes and threads compete for the single microprocessor's time (or, in the case of a multiprocessor system, for a pool of processors' time). Each process has at least one thread, and many have several. The operating system will do its best to give each process and its threads a fair share of execution time (note: "fair" does not mean "equal") while trying to minimize the costs of context switches between them.

Being able to create the illusion of parallel processing on a single-processor machine is a powerful concept, but it does not come for free. Use it, but use it with care.
What About Hardware That Supports Multiple Microprocessors?
Many servers and some desktop computers contain multiple microprocessors. Additionally, "multicore" microprocessors are becoming more common; these have multiple processing units contained on a single chip and offer many of the same advantages as several physically separate processors. Having multiple processors allows for the possibility of true parallel execution of code.

It is not inconceivable that this trend will come to mobile devices in the future. Although cost and power consumption concerns are inhibitors to having multiple physical central processing units in a mobile device, multicore processors mitigate many of these concerns. However, effective utilization of multiple processors requires dedicated support from the operating system; this is not a trivial task. Multi-microprocessor support has not been a priority for most mobile device operating systems to date; rather, these operating systems tend to focus on compactness and efficiency instead of coordinating parallel execution on multiple processors.