Asymptotic complexity of algorithms (the “big-O” notation) is a staple of computer science and engineering. We usually focus on asymptotic worst-case time complexity. For example, we know that quicksort can take up to `n`^{2} steps to sort a sequence of `n` elements (within a constant factor), so we denote the time complexity of quicksort as O(`n`^{2}). This expression represents the worst-case algorithm execution time as a function of input size in an architecture-independent way. **Or does it?**

Not quite. As architects, we know very well how data locality affects execution time. Any computation operation needs data. Depending on where the data comes from (registers, cache, DRAM, or disk), the latency of accessing the data varies and is often much larger than the latency of actual computation. More importantly, this data access latency is **not just a constant factor**.

Asymptotically, as `n`, the size of the input set, approaches infinity, the space needed to store the input set also approaches infinity. Assuming finite storage density (a constant factor), the input set takes up O(`n`) space. If the `n` input set elements are stored in three dimensions, then the distance from an input set element to the computational unit is O(∛`n`). Since the speed of information transfer is finite, the latency of accessing any input set element is also O(∛`n`).
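To make the geometry concrete, here is a minimal sketch (my own illustration, not from any paper) that packs `n` unit-volume cells into a cube and measures how far the farthest cell is from a compute unit at the center — the distance grows exactly as the cube root of `n`:

```python
# n data elements packed at unit density fill a cube of side n**(1/3).
# A compute unit at the cube's center must reach, in the worst case,
# a corner of the cube, at distance sqrt(3)/2 * n**(1/3).
def max_access_distance(n: int) -> float:
    side = n ** (1 / 3)                     # side of cube holding n unit cells
    half_diagonal = (3 ** 0.5) * side / 2   # center-to-corner distance
    return half_diagonal

for n in (10**3, 10**6, 10**9):
    print(n, round(max_access_distance(n), 1))
```

Note that growing the input by a factor of 1000 grows the worst-case distance — and hence the access latency — by a factor of 10, which is exactly the O(∛`n`) behavior.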

Therefore, for an algorithm that does not exhibit data locality (that is, each computation step may access any element of the input set), the latency of getting the data for each computation step grows asymptotically as O(∛`n`). On the other hand, if an algorithm does exhibit data locality, its data access latency will grow slower than O(∛`n`).
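A toy cost model (my own simplification, not a result from the literature) shows what this does to total running time: if each of `n` computation steps pays the worst-case O(∛`n`) access latency, a nominally linear algorithm actually costs Θ(`n`^{4/3}):

```python
# Toy cost model: n computation steps, each paying a data-access latency.
# Without locality, an access may reach any of the n elements and costs
# ~n**(1/3); with perfect locality it costs a constant.
def total_cost_no_locality(n: int) -> float:
    return n * n ** (1 / 3)   # Θ(n^(4/3)) total

def total_cost_full_locality(n: int, c: float = 1.0) -> float:
    return n * c              # Θ(n) total
```

Under this model, growing `n` by 1000× grows the locality-free cost by 10000× (1000× more steps, each 10× slower), while the fully local cost grows by only 1000×.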

In 1987, Alok Aggarwal et al. extended traditional time complexity to consider this asymptotic impact of data access latency. Here’s the citation:

A Model for Hierarchical Memory

Alok Aggarwal, Bowen Alpern, Ashok K. Chandra, and Marc Snir

In Proc. 19th Annual ACM Symp. on Theory of Computing (STOC ’87), 1987.

After establishing the general ideas, the paper looks specifically at how data locality affects asymptotic time complexity of matrix multiplication, fast Fourier transform, and sorting. For some reason, the authors consider data access latency to grow as O(log `n`) instead of O(∛`n`); a lot of the paper, however, is independent of this assumption.
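The two latency assumptions diverge quickly. A quick numerical comparison (my own, just to illustrate the gap) shows that the geometric O(∛`n`) bound dominates the paper's O(log `n`) assumption for all but small inputs:

```python
import math

# Compare the paper's O(log n) latency assumption with the geometric
# O(n**(1/3)) bound as n grows.
def latency_log(n: int) -> float:
    return math.log2(n)

def latency_cube_root(n: int) -> float:
    return n ** (1 / 3)

for n in (2**10, 2**20, 2**40):
    print(n, latency_log(n), round(latency_cube_root(n), 1))
```

Already at `n` = 2^{20} the cube root is roughly 100 while the logarithm is only 20, and the gap widens without bound.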


## About rustam

Computer architecture graduate student at The University of Texas at Austin.