Introduction to Algorithms, Second Edition

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein

3.2 Standard notations and common functions



This section reviews some standard mathematical functions and notations and explores the relationships among them. It also illustrates the use of the asymptotic notations.


Monotonicity


A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n) and strictly decreasing if m < n implies f(m) > f(n).


Floors and ceilings


For any real number x, we denote the greatest integer less than or equal to x by ⌊x⌋ (read "the floor of x") and the least integer greater than or equal to x by ⌈x⌉ (read "the ceiling of x"). For all real x,





x - 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1.    (3.3)

For any integer n,

⌈n/2⌉ + ⌊n/2⌋ = n,

and for any real number n ≥ 0 and integers a, b > 0,





⌈⌈n/a⌉/b⌉ = ⌈n/(ab)⌉,    (3.4)

⌊⌊n/a⌋/b⌋ = ⌊n/(ab)⌋,    (3.5)

⌈a/b⌉ ≤ (a + (b - 1))/b,    (3.6)

⌊a/b⌋ ≥ (a - (b - 1))/b.    (3.7)

The floor function f(x) = ⌊x⌋ is monotonically increasing, as is the ceiling function f(x) = ⌈x⌉.
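
These identities are easy to spot-check numerically. The following Python sketch is illustrative only (the helper names floor_div and ceil_div are ours, not part of the text); it verifies (3.3) through (3.7) and the ⌈n/2⌉ + ⌊n/2⌋ = n identity on random inputs using exact integer arithmetic:

    import math
    import random

    def floor_div(p, q):
        # integer ⌊p/q⌋ for q > 0
        return p // q

    def ceil_div(p, q):
        # integer ⌈p/q⌉ for q > 0
        return -((-p) // q)

    for _ in range(1000):
        x = random.uniform(-100.0, 100.0)
        assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1        # (3.3)

        n = random.randint(0, 10**9)      # n >= 0
        a = random.randint(1, 50)         # integer a > 0
        b = random.randint(1, 50)         # integer b > 0
        assert ceil_div(ceil_div(n, a), b) == ceil_div(n, a * b)         # (3.4)
        assert floor_div(floor_div(n, a), b) == floor_div(n, a * b)      # (3.5)
        assert ceil_div(a, b) <= (a + (b - 1)) / b                       # (3.6)
        assert floor_div(a, b) >= (a - (b - 1)) / b                      # (3.7)

        m = random.randint(-10**9, 10**9)
        assert ceil_div(m, 2) + floor_div(m, 2) == m                     # ⌈m/2⌉ + ⌊m/2⌋ = m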


Modular arithmetic


For any integer a and any positive integer n, the value a mod n is the remainder (or residue) of the quotient a/n:





a mod n = a - ⌊a/n⌋ n.    (3.8)

Given a well-defined notion of the remainder of one integer when divided by another, it is convenient to provide special notation to indicate equality of remainders. If (a mod n) = (b mod n), we write a ≡ b (mod n) and say that a is equivalent to b, modulo n. In other words, a ≡ b (mod n) if a and b have the same remainder when divided by n. Equivalently, a ≡ b (mod n) if and only if n is a divisor of b - a. We write a ≢ b (mod n) if a is not equivalent to b, modulo n.
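
For a positive modulus, Python's built-in % operator computes exactly the residue of equation (3.8), so these definitions can be checked directly. A small illustrative sketch (nothing beyond the standard library is assumed):

    import random

    def mod(a, n):
        # a mod n = a - ⌊a/n⌋·n, per equation (3.8); requires n > 0
        return a - (a // n) * n

    for _ in range(1000):
        a = random.randint(-1000, 1000)
        b = random.randint(-1000, 1000)
        n = random.randint(1, 100)
        assert mod(a, n) == a % n                        # matches Python's % for n > 0
        # a ≡ b (mod n)  iff  a and b have the same residue  iff  n divides b - a
        assert ((a % n) == (b % n)) == ((b - a) % n == 0)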


Polynomials


Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form


p(n) = Σ_{i=0}^{d} a_i n^i,

where the constants a_0, a_1, ..., a_d are the coefficients of the polynomial and a_d ≠ 0. A polynomial is asymptotically positive if and only if a_d > 0. For an asymptotically positive polynomial p(n) of degree d, we have p(n) = Θ(n^d). For any real constant a ≥ 0, the function n^a is monotonically increasing, and for any real constant a ≤ 0, the function n^a is monotonically decreasing. We say that a function f(n) is polynomially bounded if f(n) = O(n^k) for some constant k.
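
To see p(n) = Θ(n^d) concretely, note that the ratio p(n)/n^d approaches the leading coefficient a_d as n grows, so it is eventually trapped between positive constants. A brief Python sketch (the example coefficients are ours, chosen only for illustration):

    # p(n) = 3n^3 - 5n^2 + 7n + 11, an asymptotically positive polynomial of degree 3
    coeffs = [11, 7, -5, 3]          # a_0, a_1, a_2, a_3

    def p(n):
        # evaluate the polynomial by Horner's rule
        result = 0
        for a_i in reversed(coeffs):
            result = result * n + a_i
        return result

    for n in (10, 100, 1000, 10000):
        print(n, p(n) / n**3)        # the ratio tends to the leading coefficient 3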


Exponentials


For all real a > 0, m, and n, we have the following identities:

a^0 = 1,
a^1 = a,
a^(-1) = 1/a,
(a^m)^n = a^(mn),
(a^m)^n = (a^n)^m,
a^m a^n = a^(m+n).


For all n and a ≥ 1, the function a^n is monotonically increasing in n. When convenient, we shall assume 0^0 = 1.

The rates of growth of polynomials and exponentials can be related by the following fact. For all real constants a and b such that a > 1,

lim_{n→∞} n^b / a^n = 0,    (3.9)

from which we can conclude that

n^b = o(a^n).

Thus, any exponential function with a base strictly greater than 1 grows faster than any polynomial function.
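
Numerically, the ratio n^b/a^n in equation (3.9) collapses quickly even for a base barely above 1. The sketch below uses a = 1.1 and b = 10; these values are ours, chosen only to illustrate the limit:

    # Watch n^b / a^n tend to 0, even for a base barely above 1 and a large exponent b.
    a, b = 1.1, 10
    for n in (10, 100, 1000, 2000, 5000):
        print(n, n**b / a**n)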

Using e to denote 2.71828..., the base of the natural logarithm function, we have for all real x,

e^x = 1 + x + x^2/2! + x^3/3! + ··· = Σ_{i=0}^{∞} x^i / i!,    (3.10)

where "!" denotes the factorial function defined later in this section. For all real x, we have the inequality






(3.11)

where equality holds only when x = 0. When |x| 1, we have the approximation





1 + x ≤ e^x ≤ 1 + x + x^2.    (3.12)

When x → 0, the approximation of e^x by 1 + x is quite good:

e^x = 1 + x + Θ(x^2).

(In this equation, the asymptotic notation is used to describe the limiting behavior as x → 0 rather than as x → ∞.) We have for all x,






lim_{n→∞} (1 + x/n)^n = e^x.    (3.13)
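
Both facts are easy to observe numerically: for small x the error of the linear approximation 1 + x shrinks like x^2, and (1 + x/n)^n approaches e^x as n grows. A short illustrative Python sketch:

    import math

    # e^x = 1 + x + Θ(x^2): the error of the linear approximation behaves like x^2/2.
    for x in (0.1, 0.01, 0.001):
        err = math.exp(x) - (1 + x)
        print(x, err, err / x**2)    # err/x^2 stays near 1/2

    # lim_{n→∞} (1 + x/n)^n = e^x, equation (3.13)
    x = 2.0
    for n in (10, 100, 1000, 100000):
        print(n, (1 + x / n)**n, math.exp(x))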


Logarithms


We shall use the following notations:

lg n = log_2 n    (binary logarithm),
ln n = log_e n    (natural logarithm),
lg^k n = (lg n)^k    (exponentiation),
lg lg n = lg(lg n)    (composition).


An important notational convention we shall adopt is that logarithm functions will apply only to the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we hold b > 1 constant, then for n > 0, the function log_b n is strictly increasing.

For all real a > 0, b > 0, c > 0, and n,






a = b^(log_b a),
log_c(ab) = log_c a + log_c b,
log_b a^n = n log_b a,
log_b a = log_c a / log_c b,    (3.14)
log_b(1/a) = -log_b a,
log_b a = 1/(log_a b),

a^(log_b c) = c^(log_b a),    (3.15)

where, in each equation above, logarithm bases are not 1.

By equation (3.14), changing the base of a logarithm from one constant to another only changes the value of the logarithm by a constant factor, and so we shall often use the notation "lg n" when we don't care about constant factors, such as in O-notation. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.

There is a simple series expansion for ln(1 + x) when |x| < 1:

ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ···.
We also have the following inequalities for x > -1:





x/(1 + x) ≤ ln(1 + x) ≤ x,    (3.16)

where equality holds only for x = 0.

We say that a function f(n) is polylogarithmically bounded if f(n) = O(lg^k n) for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and 2^a for a in equation (3.9), yielding

lim_{n→∞} lg^b n / (2^a)^(lg n) = lim_{n→∞} lg^b n / n^a = 0.

From this limit, we can conclude that

lg^b n = o(n^a)

for any constant a > 0. Thus, any positive polynomial function grows faster than any polylogarithmic function.
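
Two of these facts are simple to check numerically: changing the base of a logarithm only rescales it by a constant factor (equation (3.14)), and lg^b n / n^a tends to 0 for any constant a > 0. A brief Python sketch; the sample exponents a = 0.5 and b = 3 are ours, chosen only for illustration:

    import math

    # Change of base: log_10 n / lg n is the constant 1/lg 10 ≈ 0.3010 for every n.
    for n in (10, 1000, 10**6):
        print(n, math.log(n, 10) / math.log2(n))

    # Any positive polynomial beats any polylog: lg^b n / n^a → 0, here with b = 3, a = 0.5.
    a, b = 0.5, 3
    for n in (10**3, 10**6, 10**12, 10**18):
        print(n, math.log2(n)**b / n**a)   # ratios shrink toward 0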


Factorials


The notation n! (read "n factorial") is defined for integers n ≥ 0 as

n! = 1               if n = 0,
n! = n · (n - 1)!    if n > 0.

Thus, n! = 1 · 2 · 3 ⋯ n.


A weak upper bound on the factorial function is n! ≤ n^n, since each of the n terms in the factorial product is at most n. Stirling's approximation,






n! = √(2πn) (n/e)^n (1 + Θ(1/n)),    (3.17)

where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. One can prove (see Exercise 3.2-3)






n! = o(n^n),
n! = ω(2^n),
lg(n!) = Θ(n lg n),    (3.18)

where Stirling's approximation is helpful in proving equation (3.18). The following equation also holds for all n ≥ 1:






n! = √(2πn) (n/e)^n e^(α_n),    (3.19)

where





1/(12n + 1) < α_n < 1/(12n).    (3.20)
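
Stirling's approximation is already quite accurate for small n, with a relative error of roughly 1/(12n), consistent with equations (3.19) and (3.20). A quick illustrative comparison in Python:

    import math

    # Compare n! with Stirling's approximation sqrt(2πn) (n/e)^n.
    for n in (1, 5, 10, 20, 50):
        exact = math.factorial(n)
        stirling = math.sqrt(2 * math.pi * n) * (n / math.e)**n
        print(n, exact, stirling, (exact - stirling) / exact)   # relative error ≈ 1/(12n)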


Functional iteration


We use the notation f^(i)(n) to denote the function f(n) iteratively applied i times to an initial value of n. Formally, let f(n) be a function over the reals. For nonnegative integers i, we recursively define

f^(0)(n) = n,
f^(i)(n) = f(f^(i-1)(n))    for i > 0.

For example, if f(n) = 2n, then f^(i)(n) = 2^i n.
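
A direct implementation of this definition, together with the f(n) = 2n example, takes only a few lines of Python (the names iterate and f are ours):

    def iterate(f, i, n):
        # f^(i)(n): apply f to n, i times (f^(0)(n) = n)
        for _ in range(i):
            n = f(n)
        return n

    f = lambda n: 2 * n
    for i in range(5):
        print(i, iterate(f, i, 3), (2**i) * 3)   # the two columns agree: f^(i)(3) = 2^i · 3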


The iterated logarithm function


We use the notation lg* n (read "log star of n") to denote the iterated logarithm, which is defined as follows. Let lg^(i) n be as defined above, with f(n) = lg n. Because the logarithm of a nonpositive number is undefined, lg^(i) n is defined only if lg^(i-1) n > 0. Be sure to distinguish lg^(i) n (the logarithm function applied i times in succession, starting with argument n) from lg^i n (the logarithm of n raised to the ith power). The iterated logarithm function is defined as

lg* n = min {i ≥ 0 : lg^(i) n ≤ 1}.

The iterated logarithm is a very slowly growing function:

lg* 2 = 1,
lg* 4 = 2,
lg* 16 = 3,
lg* 65536 = 4,
lg* (2^65536) = 5.


Since the number of atoms in the observable universe is estimated to be about 10^80, which is much less than 2^65536, we rarely encounter an input size n such that lg* n > 5.
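
A direct implementation of lg* from the definition reproduces the small table above. The Python sketch below is illustrative (the function name log_star is ours); the final test argument 2^65536 is a huge integer, which Python's arbitrary-precision integers and math.log2 handle directly:

    import math

    def log_star(n):
        # lg* n = min { i ≥ 0 : lg^(i) n ≤ 1 }
        i = 0
        while n > 1:
            n = math.log2(n)
            i += 1
        return i

    for n in (2, 4, 16, 65536, 2**65536):
        print(log_star(n))     # prints 1, 2, 3, 4, 5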


Fibonacci numbers


The Fibonacci numbers are defined by the following recurrence:






F_0 = 0,
F_1 = 1,
F_i = F_{i-1} + F_{i-2}    for i ≥ 2.    (3.21)

Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ... .

Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are given by the following formulas:






φ = (1 + √5)/2 = 1.61803...,
φ̂ = (1 - √5)/2 = -0.61803....    (3.22)

Specifically, we have





F_i = (φ^i - φ̂^i)/√5,    (3.23)

which can be proved by induction (Exercise 3.2-6). Since |φ̂| < 1, we have |φ̂^i|/√5 < 1/√5 < 1/2, so that the ith Fibonacci number F_i is equal to φ^i/√5 rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially.
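
Both the recurrence (3.21) and the closed form (3.23), along with the rounding claim, are simple to verify for small i. An illustrative Python sketch (the function name fib is ours):

    import math

    def fib(i):
        # F_i computed from the recurrence (3.21)
        a, b = 0, 1
        for _ in range(i):
            a, b = b, a + b
        return a

    phi = (1 + math.sqrt(5)) / 2        # golden ratio φ
    phi_hat = (1 - math.sqrt(5)) / 2    # its conjugate φ̂

    for i in range(15):
        closed = (phi**i - phi_hat**i) / math.sqrt(5)    # equation (3.23)
        rounded = round(phi**i / math.sqrt(5))           # φ^i/√5 rounded to the nearest integer
        print(i, fib(i), round(closed), rounded)         # all three columns agree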

Exercises 3.2-1






Show that if f(n) and g(n) are monotonically increasing functions, then so are the functions f(n) + g(n) and f(g(n)), and if f(n) and g(n) are in addition nonnegative, then f(n) · g(n) is monotonically increasing.











Exercises 3.2-2






Prove equation (3.15).











Exercises 3.2-3






Prove equation (3.18). Also prove that n! = ω(2n) and n! = o(nn).











Exercises 3.2-4:






Is the function ⌈lg n⌉! polynomially bounded? Is the function ⌈lg lg n⌉! polynomially bounded?











Exercises 3.2-5:






Which is asymptotically larger: lg(lg* n) or lg*(lg n)?











Exercises 3.2-6






Prove by induction that the ith Fibonacci number satisfies the equality

F_i = (φ^i - φ̂^i)/√5,

where φ is the golden ratio and φ̂ is its conjugate.











Exercises 3.2-7






Prove that for i ≥ 0, the (i + 2)nd Fibonacci number satisfies F_{i+2} ≥ φ^i.











Problems 3-1: Asymptotic behavior of polynomials






Let


p(n) = Σ_{i=0}^{d} a_i n^i,

where a_d > 0, be a degree-d polynomial in n, and let k be a constant. Use the definitions of the asymptotic notations to prove the following properties.



  1. If k ≥ d, then p(n) = O(n^k).

  2. If k ≤ d, then p(n) = Ω(n^k).

  3. If k = d, then p(n) = Θ(n^k).

  4. If k > d, then p(n) = o(n^k).

  5. If k < d, then p(n) = ω(n^k).













Problems 3-2: Relative asymptotic growths






Indicate, for each pair of expressions (A, B) in the table below, whether A is O, o, Ω, ω, or Θ of B. Assume that k ≥ 1, ε > 0, and c > 1 are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.

         A            B            O    o    Ω    ω    Θ

    a.   lg^k n       n^ε
    b.   n^k          c^n
    c.   √n           n^(sin n)
    d.   2^n          2^(n/2)
    e.   n^(lg c)     c^(lg n)
    f.   lg(n!)       lg(n^n)












Problems 3-3: Ordering by asymptotic growth rates








  1. Rank the following functions by order of growth; that is, find an arrangement g_1, g_2, ..., g_30 of the functions satisfying g_1 = Ω(g_2), g_2 = Ω(g_3), ..., g_29 = Ω(g_30). Partition your list into equivalence classes such that f(n) and g(n) are in the same class if and only if f(n) = Θ(g(n)).




  2. Give an example of a single nonnegative function f(n) such that for all functions g_i(n) in part (a), f(n) is neither O(g_i(n)) nor Ω(g_i(n)).














Problems 3-4: Asymptotic notation properties






Let f(n) and g(n) be asymptotically positive functions. Prove or disprove each of the following conjectures.



  1. f(n) = O(g(n)) implies g(n) = O(f(n)).



  2. f(n) + g(n) = Θ(min(f(n), g(n))).



  3. f(n) = O(g(n)) implies lg(f(n)) = O(lg(g(n))), where lg(g(n)) ≥ 1 and f(n) ≥ 1 for all sufficiently large n.

  4. f(n) = O(g(n)) implies 2^(f(n)) = O(2^(g(n))).

  5. f(n) = O((f(n))^2).

  6. f(n) = O(g(n)) implies g(n) = Ω(f(n)).



  7. f(n) = Θ(f(n/2)).



  8. f(n) + o( f(n)) = Θ(f(n)).













Problems 3-5: Variations on O and Ω






Some authors define Ω in a slightly different way than we do; let's use Ω∞ (read "omega infinity") for this alternative definition. We say that f(n) = Ω∞(g(n)) if there exists a positive constant c such that f(n) ≥ cg(n) ≥ 0 for infinitely many integers n.



  1. Show that for any two functions f(n) and g(n) that are asymptotically nonnegative, either f(n) = O(g(n)) or f(n) = Ω∞(g(n)) or both, whereas this is not true if we use Ω in place of Ω∞.



  2. Describe the potential advantages and disadvantages of using Ω∞ instead of Ω to characterize the running times of programs.



Some authors also define O in a slightly different manner; let's use O' for the alternative definition. We say that f(n) = O'(g(n)) if and only if |f(n)| = O(g(n)).



  1. What happens to each direction of the "if and only if" in Theorem 3.1 if we substitute O' for O but still use Ω?



Some authors define Õ (read "soft-oh") to mean O with logarithmic factors ignored:

Õ(g(n)) = {f(n) : there exist positive constants c, k, and n_0 such that 0 ≤ f(n) ≤ cg(n) lg^k(n) for all n ≥ n_0}.



  1. Define Ω̃ and Θ̃ in a similar manner. Prove the corresponding analog to Theorem 3.1.













Problems 3-6: Iterated functions






The iteration operator * used in the lg* function can be applied to any monotonically increasing function f(n) over the reals. For a given constant c ∈ R, we define the iterated function f_c^* by

f_c^*(n) = min {i ≥ 0 : f^(i)(n) ≤ c},

which need not be well-defined in all cases. In other words, the quantity f_c^*(n) is the number of iterated applications of the function f required to reduce its argument down to c or less.

For each of the following functions f(n) and constants c, give as tight a bound as possible on f_c^*(n).

         f(n)        c    f_c^*(n)

    a.   n - 1       0
    b.   lg n        1
    c.   n/2         1
    d.   n/2         2
    e.   √n          2
    f.   √n          1
    g.   n^(1/3)     2
    h.   n/lg n      2













