Increase time complexity to overcome space complexity
An answer to this question on Stack Overflow.
Question
So I have an array a_0 of size, let's say, 10^5, and now I have to make some changes to this array. The i-th change can be computed by a function f(a_{i-1}) that gives a_i in O(1) time, where a_j denotes the array after the j-th change has been made to it. That is, a_i can be computed in constant time if we know a_{i-1}. I know beforehand that I have to make 10^5 changes.
Now the problem asks me to answer a large number of queries of the form a_i[p] - a_j[q], where a_x[y] denotes the y-th element of the array after the x-th change has been made to a_0.
Now if I had space on the order of 10^10, I could easily answer each query in O(1) by storing all 10^5 arrays beforehand, but I don't (generally) have that kind of space. I could also answer these queries by regenerating a_i and a_j from scratch each time, but I can't afford that kind of time complexity either, so I was wondering whether I could handle this problem with some data structure.
EDIT: Example:
We define an array B = {1, 3, 1, 4, 2, 6}, and we define a_j as the array storing the frequency of each number after the j-th element has been added to B; that is, a_j[i] counts how many times the value i appears among the first j elements. So a_0 = {0,0,0,0,0,0}, and then a_1 = {1,0,0,0,0,0}, a_2 = {1,0,1,0,0,0}, a_3 = {2,0,1,0,0,0}, a_4 = {2,0,1,1,0,0}, a_5 = {2,1,1,1,0,0}, and a_6 = {2,1,1,1,0,1}.
f(a_{j-1}) just adds the next element of B to the tally and updates a_{j-1} to give a_j.
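
For concreteness, here is a minimal sketch of that example in Python. The names B, f, prev, and nxt are mine, not from the problem; f here is given the new element explicitly, and the copy exists only so each a_j survives for inspection (the update itself is O(1)).

B = [1, 3, 1, 4, 2, 6]

def f(prev, value):
    # Return a_j given a_{j-1} and the element just appended to B.
    # The single-slot update is O(1); the copy just preserves a_{j-1}.
    nxt = prev.copy()
    nxt[value - 1] += 1  # the value v is tallied in slot v-1
    return nxt

a = [[0] * len(B)]  # a_0 = {0,0,0,0,0,0}
for v in B:
    a.append(f(a[-1], v))
print(a[6])  # [2, 1, 1, 1, 0, 1]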
Answer
If there is some maximum number of states N, then checkpoints are a good way to go. For instance, if N = 100,000, you might store a snapshot of every hundredth state:
c0 = [3, 5, 7, 1, ...]
c100 = [1, 4, 9, 8, ...]
c200 = [9, 7, 1, 2, ...]
...
c99900 = [1, 1, 4, 6, ...]
Now you have 1,000 checkpoints. You can find the nearest preceding checkpoint for an arbitrary state x in O(1) time (it is checkpoint number x/100, rounded down) and reconstruct x from it with at most 99 applications of f.
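
A minimal sketch of the checkpoint scheme in Python, assuming f is a pure function that advances the array by one state; the names build_checkpoints, get_state, and the spacing K are mine:

K = 100  # checkpoint spacing

def build_checkpoints(a0, f, n_states, k=K):
    # checkpoints[i] holds state i*k; built once with n_states calls to f.
    checkpoints = [list(a0)]
    state = list(a0)
    for i in range(1, n_states + 1):
        state = f(state)
        if i % k == 0:
            checkpoints.append(list(state))
    return checkpoints

def get_state(x, checkpoints, f, k=K):
    # Start at the nearest preceding checkpoint, apply f at most k-1 times.
    state = list(checkpoints[x // k])
    for _ in range(x % k):
        state = f(state)
    return state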
Riffing off of my comment on your question and John Zwinck's answer: if your mutating function f(*) is expensive but its effects are limited to only a few elements, then you could store the incremental changes alongside the checkpoints. Doing so won't decrease the time complexity of the algorithm, but it may reduce the actual run time.
If you had unlimited space, you would just store every state as a checkpoint. Since you do not, you'll have to balance the number of checkpoints against the number of incremental changes you replay. Finding the right balance will require some experimentation, probably centered on how expensive f(*) is and how many elements each call touches; a sketch of the idea follows.
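
A sketch of storing incremental changes under the same assumptions, where deltas[i] records the (index, new_value) pairs that change i made, so states can be replayed from a checkpoint without re-running an expensive f; record_delta and get_state_from_deltas are hypothetical names:

def record_delta(old, new):
    # The handful of positions change i touched, assuming f's effects are
    # sparse. The O(n) scan is a one-time cost while building the deltas.
    return [(idx, new[idx]) for idx in range(len(old)) if old[idx] != new[idx]]

def get_state_from_deltas(x, checkpoints, deltas, k=K):
    # Replay cheap stored deltas instead of recomputing f from the checkpoint.
    state = list(checkpoints[x // k])
    for i in range((x // k) * k, x):
        for idx, val in deltas[i]:
            state[idx] = val
    return state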
Another option is to look at query behavior. If users tend to query the same or nearby states repeatedly, you may be able to leverage an LRU (least recently used) cache of reconstructed states.
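
A sketch of the caching idea, assuming the checkpoints, f, and get_state from the earlier sketch are in scope; functools.lru_cache keeps the most recently reconstructed states around, and each state is stored as a tuple so the cached value is immutable:

from functools import lru_cache

@lru_cache(maxsize=256)
def cached_state(x):
    return tuple(get_state(x, checkpoints, f))

def answer_query(i, p, j, q):
    # a_i[p] - a_j[q]; repeated state indices hit the cache, and nearby
    # ones at least share a checkpoint.
    return cached_state(i)[p] - cached_state(j)[q]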