TR-2007-17

Imperative Self-Adjusting Computation

Umut A. Acar; Amal Ahmed; Matthias Blume. 10 September, 2007.
Communicated by Robby Findler.

Abstract

Recent work on self-adjusting computation showed how to systematically write programs that respond efficiently to incremental changes in their inputs. The idea is to represent changeable data using modifiable references, i.e., special data structures that keep track of dependencies between read and write operations, and to let computations construct traces that can later, after changes have occurred, drive a change-propagation algorithm. The approach has been shown to be effective for a variety of algorithmic problems, including some for which ad hoc solutions had previously remained elusive.
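
To make the mechanism concrete, the following OCaml-style sketch gives a hypothetical modifiable-reference interface in the spirit described above. The names (create, read, write, change, propagate) and the signature are illustrative assumptions, not the paper's actual primitives.

    (* A minimal sketch of a modifiable-reference interface, in the style of
       the self-adjusting-computation literature; names are assumptions. *)
    module type SELF_ADJUSTING = sig
      type 'a modref                              (* a modifiable reference *)
      val create    : 'a -> 'a modref             (* allocate and initialize *)
      val read      : 'a modref -> ('a -> unit) -> unit
                                                  (* read, recording a dependency *)
      val write     : 'a modref -> 'a -> unit     (* write a result *)
      val change    : 'a modref -> 'a -> unit     (* change an input from outside *)
      val propagate : unit -> unit                (* rerun only the affected reads *)
    end

    (* Example use: keep [sum] consistent with [a + b]. After [change a 10],
       [propagate] re-executes only the reads that depend on [a]. *)
    let demo (module S : SELF_ADJUSTING) =
      let a = S.create 1 and b = S.create 2 in
      let sum = S.create 0 in
      S.read a (fun va -> S.read b (fun vb -> S.write sum (va + vb)));
      S.change a 10;
      S.propagate ();
      S.read sum (fun s -> Printf.printf "sum = %d\n" s)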

All previous work on self-adjusting computation, however, relied on a purely functional programming model. In this paper, we show that it is possible to remove this limitation and support modifiable references that can be written multiple times. We formalize this using a language AIL for which we define evaluation and change-propagation semantics. AIL closely resembles a traditional higher-order imperative programming language. For AIL we state and prove consistency, i.e., the property that although the semantics is inherently non-deterministic, different evaluation paths still give observationally equivalent results. In the imperative setting, where pointer graphs in the store can form cycles, our previous proof techniques do not apply. Instead, we make use of a novel form of step-indexed logical relation that handles modifiable references.
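
Reusing the hypothetical interface sketched above, the fragment below illustrates what the imperative extension permits: the same modifiable (total) is written several times during one run, which the earlier write-once, purely functional discipline rules out. The accumulation pattern is only an illustration, not the paper's formal development.

    (* Accumulate a sum imperatively: [total] is initialized once and then
       overwritten by every iteration. Multiple writes to one modifiable are
       exactly what the imperative model adds. *)
    let accumulate (module S : SELF_ADJUSTING) =
      let inputs = List.map S.create [3; 4; 5] in
      let total = S.create 0 in
      List.iter
        (fun m ->
          S.read m (fun v ->
            (* each iteration writes [total] again; in the purely functional,
               write-once setting this would not be allowed *)
            S.read total (fun t -> S.write total (t + v))))
        inputs;
      S.read total (fun t -> Printf.printf "total = %d\n" t)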

We show that AIL can be realized efficiently by describing implementation strategies whose overhead is provably constant-time per primitive. When the number of reads and writes per modifiable is bounded by a constant, we can show that change propagation becomes as efficient as it was in the pure case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We use DFS and related algorithms on graphs as our running examples and prove that they respond to insertions and deletions of edges efficiently.
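
Under the same assumed interface, the sketch below hints at the running example: adjacency lists are stored in modifiables, so inserting or deleting an edge changes a single modifiable and change propagation re-executes only the reads that depend on it. This is not the paper's DFS algorithm; in particular, the visited marks here are ordinary state and are not change-aware.

    (* Vertices 0..3 with edges 0->1, 0->2, 2->3; each adjacency list lives in
       a modifiable. A real self-adjusting DFS must also make the visited
       marks change-aware; they are a plain array here for brevity. *)
    let dfs_demo (module S : SELF_ADJUSTING) =
      let adj = [| S.create [1; 2]; S.create []; S.create [3]; S.create [] |] in
      let visited = Array.make 4 false in
      let rec visit u =
        if not visited.(u) then begin
          visited.(u) <- true;
          (* this read records a dependency on u's adjacency list *)
          S.read adj.(u) (fun ns -> List.iter visit ns)
        end
      in
      visit 0;
      (* insert edge 1->3: one modifiable changes, and propagation reruns
         only the read of adj.(1) rather than the whole traversal *)
      S.change adj.(1) [3];
      S.propagate ()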

Original Document

The original document is available in PDF (uploaded 10 September, 2007 by Robby Findler).