*In which we introduce online algorithms and discuss the buy-vs-rent problem, the secretary problem, and caching.*

In this lecture and the next we will look at various examples of algorithms that operate under partial information. The input to these algorithms is provided as a “stream,” and, at each point in time, the algorithms need to make certain decisions, based on the part of the input that they have seen so far, but without knowing the rest of the input. If we knew that the input was coming from a simple distribution, then we could “learn” the distribution based on an initial segment of the input, and then proceed based on a probabilistic prediction of what the rest of the input is going to be like. In our analysis, instead, we will mostly take a worst-case point of view in which, at any point in time, the unknown part of the input could be anything. Interestingly, however, algorithms that are motivated by “learn and predict” heuristics often work well also from the point of view of worst-case analysis.

**1. Online Algorithms and Competitive Analysis**

We will look at online algorithms for optimization problems, and we will study them from the point of view of *competitive analysis*. The *competitive ratio* of an online algorithm for an optimization problem is simply the approximation ratio achieved by the algorithm, that is, the worst-case ratio between the cost of the solution found by the algorithm and the cost of an optimal solution.

Let us consider a concrete example: we decide to go skiing in Tahoe for the first time. Buying the equipment costs about $500 and renting it for a weekend costs $50. Should we buy or rent? Clearly it depends on how many more times we are going to go skiing in the future. If we will go skiing a total of 11 times or more, then it is better to buy, and to do it now. If we will go 9 times or fewer, then it is better to rent, and if we go 10 times it does not matter. What is an “online algorithm” in this case? Each time we want to go skiing, unless we have bought equipment a previous time, we have to decide whether we are going to buy or rent. After we buy, there is no more decision to make; at any time, the only “input” for the algorithm is the fact that this is the $i$-th time we are going skiing, and that we have been renting so far; the algorithm decides whether to buy or rent based on $i$. A deterministic algorithm is thus completely described by the time $i$ at which it decides that it is time to buy.

What are the competitive ratios of the possible choices of $i$? If $i=1$, that is, if we buy before the first time we go skiing, then the competitive ratio is $10$, because we always spend $500, and if it so happens that we never go skiing again after the first time, then the optimum is $50. If $i=2$, then the competitive ratio is $5.5$, because if we go skiing twice then we rent the first time and buy the second, spending a total of $550, but the optimum is $100. In general, for every $i \leq 10$, the competitive ratio is

$$\frac{50(i-1)+500}{50 i} = \frac{i+9}{i}$$

If $i \geq 11$, then the competitive ratio is

$$\frac{50(i-1)+500}{500} = \frac{i+9}{10}$$

So the best choice of $i$ is $i=10$, which gives the competitive ratio $19/10$.

The general rule for buy-versus-rent problems is to keep renting until what we have spent renting equals the cost of buying. After that, we buy.
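As a sanity check, the whole calculation fits in a few lines of Python. This is a sketch of our own (the function name `competitive_ratio` is ours), using the $50/$500 prices from the example:

```python
# Competitive ratio of the deterministic strategy "rent the first i-1 times,
# buy on the i-th trip". The adversary's worst case is that we stop skiing
# right after we buy, i.e., we go skiing exactly i times.
rent, buy = 50, 500

def competitive_ratio(i):
    algorithm_cost = rent * (i - 1) + buy  # i-1 rentals, then one purchase
    optimum = min(rent * i, buy)           # best offline choice for i trips
    return algorithm_cost / optimum

ratios = {i: competitive_ratio(i) for i in range(1, 21)}
best = min(ratios, key=ratios.get)
print(best, ratios[best])  # i = 10 gives the best ratio, 19/10 = 1.9
```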

(The “predicting” perspective is that if we have gone skiing 10 times already, it makes sense to expect that we will keep going at least 10 more times in the future, which justifies buying the equipment. We are doing worst-case analysis, and so it might instead be that we stop going skiing right after we buy the equipment. But since we have already gone 10 times, the prediction that we are going to go a total of at least 20 times is correct within a factor of two.)

**2. The Secretary Problem**

Suppose we have joined an online dating site, and that there are $n$ people that we are rather interested in. We would like to end up dating the best one. (We are assuming that people are comparable, and that there is a consistent way, after meeting two people, to decide which one is better.) We could go out with all of them, one after the other, and then pick the best, but our traditional values are such that if we are dating someone, we are not going to go out with anybody else unless we first break up. Under the rather presumptuous assumption that everybody wants to date us, and that the only issue is who we are going to choose, how can we maximize the probability of ending up dating the best person? We are going to pick a random order of the people, and go out, one after the other, with the first $n/e$ people. In these first $n/e$ dates, we just waste other people’s time: no matter how the dates go, we tell them that it’s not them, it’s us, that we need some space and so on, and we move on to the next. The purpose of this first “phase” is to calibrate our expectations. After these dates, we continue to go out on more dates following the random order, but as soon as we find someone who is *better than everybody we have seen so far*, that’s the one we are going to pick. We will show that this strategy picks the best person with probability about $1/e$, which is about $37\%$.
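A quick Monte Carlo simulation agrees with the $1/e$ prediction. This is a sketch of ours (the function `secretary` and its parameters are our naming, not part of the notes):

```python
import math
import random

def secretary(n, k, trials=100_000, seed=0):
    """Estimate the probability that "reject the first k dates, then pick the
    first person better than everybody seen so far" picks the best of n."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(1, n + 1))  # rank 1 = best person
        rng.shuffle(ranks)
        threshold = min(ranks[:k])     # best rank among the first k dates
        for r in ranks[k:]:
            if r < threshold:          # first person better than all so far
                wins += (r == 1)
                break
        # if nobody beats the threshold, we pick nobody: not a win
    return wins / trials

n = 100
print(secretary(n, round(n / math.e)))  # close to 1/e, about 0.37
```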

How does one prove such a statement? Suppose that our strategy is to reject the people we meet in the first $k$ dates, and then, from date $k+1$ on, to pick the first person that is better than all the others seen so far. The above algorithm corresponds to the choice $k = n/e$.

Let us identify our suitors with the integers $\{1,\ldots,n\}$, with the meaning that 1 is the best, 2 is the second best, and so on. After we randomly permute the order of the people, we have a random permutation $\pi(1),\ldots,\pi(n)$ of the integers $\{1,\ldots,n\}$. The process described above corresponds to finding the minimum of $\pi(1),\ldots,\pi(k)$, and then finding the first $j > k$ such that $\pi(j)$ is smaller than the minimum of $\pi(1),\ldots,\pi(k)$. We want to compute the probability that $\pi(j)=1$. We can write this probability as

$$\sum_{j=k+1}^{n} \Pr[\pi(j)=1 \mbox{ and we pick the person of date } j]$$

Now, suppose that, for some $j > k$, $\pi(j)=1$. When does it happen that we do *not* end up with the best person? We fail to get the best person if, between the $(k+1)$-th date and the $(j-1)$-th date, we meet someone who is better than the people met in the first $k$ dates, and so we pick that person instead of the best person. For this to happen, the minimum of $\pi(1),\ldots,\pi(j-1)$ has to occur in one of the locations between $k+1$ and $j-1$. Equivalently, we *do* pick the best person if the best among the first $j-1$ people happens to be one of the first $k$ people.

We can rewrite the probability of picking the best person as

$$\sum_{j=k+1}^{n} \Pr[\pi(j)=1] \cdot \Pr\left[\min\{\pi(1),\ldots,\pi(j-1)\} \mbox{ occurs among the first } k \mbox{ locations}\right] = \sum_{j=k+1}^{n} \frac 1n \cdot \frac{k}{j-1}$$

To see that the above equation is right: $\Pr[\pi(j)=1] = 1/n$ because, in a random permutation, 1 is equally likely to be the output of any of the $n$ possible inputs. Conditioned on $\pi(j)=1$, the minimum of $\pi(1),\ldots,\pi(j-1)$ is equally likely to occur in any of the $j-1$ places, and so there is a probability $k/(j-1)$ that it occurs in one of the first $k$ locations. (Some readers may find this claim suspicious; it can be confirmed by explicitly counting how many permutations are such that $\pi(j)=1$ and the minimum of $\pi(1),\ldots,\pi(j-1)$ occurs in location $i$, and verifying that for each $j$ and each $i \leq j-1$ the number of these permutations is exactly $(n-1)!/(j-1)$.)
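The counting claim in the parenthesis can be checked by brute force for small $n$. The sketch below is ours (the helper `count` and the small values of $n$ and $j$ are arbitrary choices):

```python
from itertools import permutations
from math import factorial

def count(n, j, i):
    """Count permutations pi of {1,...,n} with pi(j) = 1 whose prefix
    pi(1),...,pi(j-1) attains its minimum at location i."""
    total = 0
    for pi in permutations(range(1, n + 1)):
        if pi[j - 1] != 1:
            continue
        prefix = pi[:j - 1]
        if prefix.index(min(prefix)) == i - 1:
            total += 1
    return total

n, j = 6, 4
for i in range(1, j):
    # the count is the same for every location i, namely (n-1)!/(j-1)
    assert count(n, j, i) == factorial(n - 1) // (j - 1)
```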

So the probability of picking the best person is

$$\sum_{j=k+1}^{n} \frac kn \cdot \frac{1}{j-1} \approx \frac kn \cdot \ln \frac nk$$

And the last expression is optimized by $k = n/e$, in which case the expression is $1/e$.
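One can also evaluate the exact sum numerically and optimize over $k$; for $n = 1000$ the best $k$ is very close to $n/e$, and the optimal probability is very close to $1/e$. This is a sketch with names of our choosing:

```python
import math

def success_probability(n, k):
    """Exact value of sum_{j=k+1}^{n} (1/n) * (k/(j-1))."""
    return sum((1 / n) * (k / (j - 1)) for j in range(k + 1, n + 1))

n = 1000
best_k = max(range(1, n), key=lambda k: success_probability(n, k))
print(best_k, success_probability(n, best_k))  # close to n/e and to 1/e
```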

Note that this problem was not in the “competitive analysis” framework, that is, we were not trying to find an approximate solution, but rather to find the optimal solution with high probability.

Note also that, with probability $k/n \approx 1/e$, the algorithm causes us to pick nobody, because the best person is one of the first $k$ with probability $k/n$, and when this happens we set our standards so high that we end up alone.

Suppose instead that we always want to end up with someone, and that we want to optimize the “rank” of the person we pick, that is, the place in which that person fits in our ranking from 1 to $n$. If we apply the above algorithm, with the modification that we pick the last person if we have gotten that far, then with probability about $1/e$ we pick the last person, who, on average, has rank $n/2$, so the average rank of the person we pick is at least about $n/(2e)$. (This is not a rigorous argument, but it is close to the argument that establishes rigorously that the average rank is $\Omega(n)$.)

In general, any algorithm that is based on rejecting the first $k$ people, and then picking the first subsequent one who is better than the first $k$, or the last one if we have gotten that far, picks a person of average rank $\Omega(\sqrt n)$.

Quite surprisingly, there is an algorithm that picks a person of average rank $O(1)$, and which is then $O(1)$-competitive for the optimization problem of minimizing the rank. The algorithm is rather complicated, and it is based on first computing a series of timesteps $t_1 < t_2 < \cdots$ according to a rather complicated formula, and then proceeding as follows: we reject the first $t_1$ people; then, if we find someone among the first $t_2$ dates who is better than all the previous people, we pick that person. Otherwise, between the $t_2$-th and the $t_3$-th date, we are willing to pick someone if that person is either the best or the second best of those seen so far. Between the $t_3$-th and $t_4$-th date, we become willing to pick anybody who is at least the *third-best* person seen so far, and so on. Basically, as time goes on, we become increasingly desperate, and we reduce our expectations accordingly.

**3. Paging and Caching**

The next problem that we study arises in any system that has hierarchical memory, that is, that has a larger but slower storage device and a faster but smaller one that can be used as cache. Consider for example the virtual memory paged on a disk and the real memory, or the content of a hard disk and the cache on the controller, or the RAM in a computer and the level-2 cache on the processor, or the level-2 and the level-1 cache, and so on.

All these applications can be modeled in the following way: there is a cache, which is an array with $k$ entries. Each entry contains a copy of an entry of a larger memory device, together with a pointer to the location of that entry. When we want to access a location of the larger device (a *request*), we first look up in the cache whether we have the content of that entry stored there. If so, we have a *hit*, and the access takes negligible time. Otherwise, we have a *miss*, and we need to fetch the entry from the slower large device. In negligible extra time, we can also copy the entry into the cache for later use. If the cache is already full, however, we need to first delete one of the current cache entries in order to make room for the new one. Which one should we delete?

Here we have an online problem in which the data is the sequence of requests, the decisions of the algorithm are the entries to delete from the cache when it is full and there is a miss, and the cost function that we want to minimize is the number of misses (which determine the only non-negligible computational time).

A reasonably good competitive algorithm is to remove the entry for which *the longest time has passed since the last request*. This is the Least Recently Used heuristic, or LRU.
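The LRU rule itself is only a few lines. Here is a sketch of a miss counter of our own, built on Python’s `OrderedDict` (the name `lru_misses` is ours):

```python
from collections import OrderedDict

def lru_misses(requests, k):
    """Number of misses of LRU with a size-k cache on a request sequence."""
    cache = OrderedDict()  # keys ordered from least to most recently used
    misses = 0
    for r in requests:
        if r in cache:
            cache.move_to_end(r)           # hit: r is now most recently used
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used
            cache[r] = True
    return misses

print(lru_misses([1, 2, 1, 3, 4, 1, 2, 5, 3, 2, 4, 4, 3], 3))  # 8 misses
```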

**Theorem 1** *Suppose that, for a certain sequence of requests, the optimal sequence of choices for a size-$k/2$ cache causes $m$ misses. Then, for the same sequence of requests, LRU for a size-$k$ cache causes at most $2m$ misses.*

This means that, for a size-$k$ cache, LRU is $2$-competitive against an algorithm that knows the future, makes optimal choices, and uses a size-$k/2$ cache. More interestingly, it says that if LRU caused $m$ misses on a size-$k$ cache on a certain sequence of requests, then even an optimal algorithm that knew the future would have caused at least $m/2$ misses using a size-$k/2$ cache.

*Proof:* Suppose the large memory device has size $N$, so that a sequence of requests is a sequence of integers in the range $\{1,\ldots,N\}$. Let us divide the sequence into “phases” in the following way. Let $t_1$ be the time at which we see the $(k+1)$-th distinct request. Then the first phase is $1,\ldots,t_1-1$. Next, consider the sequence of requests from time $t_1$ on, and recursively divide it into phases. For example, if $k=3$ and we have the sequence of requests

$$1,2,1,3,4,1,2,5,3,2,4,4,3$$

then the division into phases is

$$(1,2,1,3),\ (4,1,2),\ (5,3,2),\ (4,4,3)$$
In each phase, LRU causes at most $k$ misses: a phase contains requests to only $k$ distinct values, and once a value enters the cache, LRU does not evict it until $k$ other distinct values have been requested, which cannot happen within the same phase.
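The phase decomposition is easy to compute. The following sketch is ours (the function name `phases` and the request sequence are made up), and it cuts a new phase just before the $(k+1)$-th distinct value appears:

```python
def phases(requests, k):
    """Split a request sequence into phases: each phase ends just before
    the (k+1)-th distinct value appears."""
    result, current, distinct = [], [], set()
    for r in requests:
        if r not in distinct and len(distinct) == k:
            result.append(current)
            current, distinct = [], set()
        current.append(r)
        distinct.add(r)
    if current:
        result.append(current)
    return result

print(phases([1, 2, 1, 3, 4, 1, 2, 5, 3, 2, 4, 4, 3], 3))
# [[1, 2, 1, 3], [4, 1, 2], [5, 3, 2], [4, 4, 3]]
```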

Consider now an arbitrary other algorithm, operating with a size-$k/2$ cache, and consider its behavior over a sequence of phases which is like the above one, but with the first item of each phase moved to the previous phase:

$$(1,2,1,3,4),\ (1,2,5),\ (3,2,4),\ (4,3)$$

In the first phase, we have $k+1$ distinct values, and so we definitely have at least $k$ misses starting with an empty cache, no matter what the algorithm does. At the beginning of each subsequent phase, we know that the algorithm has in the cache the last request of the previous phase, and then we do not know what is in the remaining $k/2 - 1$ entries. We know, however, that we are going to see $k$ distinct requests which are different from the last request, and so at least $k - (k/2 - 1) > k/2$ of them must be out of the cache and must cause a miss. So even an optimal algorithm causes at least $k/2$ misses per phase, compared with the at most $k$ misses per phase of LRU, hence the competitive ratio of $2$.

(Note: we are glossing over the issue of what happens in the last phase: if the last phase has fewer than $k$ distinct requests, it could happen that the optimal algorithm has zero misses while LRU has a positive number of misses. In that case, we use the “surplus” that we have in the analysis of the first phase, in which the optimum algorithm and LRU both have about $k$ misses.)

It can be proved that, if we knew the sequence of requests, then the optimal algorithm is to take out of the cache the element *whose next request is furthest in the future*. The LRU algorithm is motivated by the heuristic that the element that has not been used for the longest time is likely to also not be needed for the longest time. It is remarkable, however, that such a heuristic works well even in a worst-case analysis.
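For comparison, the offline-optimal “furthest in the future” rule can be sketched as follows (our own code and naming; on a small made-up sequence it incurs 6 misses, where LRU with the same cache size incurs 8):

```python
def opt_misses(requests, k):
    """Misses of the offline-optimal rule: on a miss with a full size-k
    cache, evict the entry whose next request is furthest in the future."""
    cache, misses = set(), 0
    for t, r in enumerate(requests):
        if r in cache:
            continue
        misses += 1
        if len(cache) == k:
            def next_use(x):
                for s in range(t + 1, len(requests)):
                    if requests[s] == x:
                        return s
                return float('inf')  # never requested again: evict first
            cache.remove(max(cache, key=next_use))
        cache.add(r)
    return misses

print(opt_misses([1, 2, 1, 3, 4, 1, 2, 5, 3, 2, 4, 4, 3], 3))  # 6 misses
```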
