It is not unusual, especially in large models with lots of data, to compute additional data derived from the instance data: for example, an integer that determines the size of some arrays.
Such derived data is conveniently expressed as par functions, like so:
function int: myDerivedData(myEnum: x) = (some complex function of x);
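For concreteness, a small self-contained instance of this pattern might look like the following sketch; ITEM, weight, and slots are made-up names for illustration only, not taken from the real model:

enum ITEM;                                    % instance data
array[ITEM] of int: weight;                   % instance data
% derived data, expressed as a par function
function int: slots(ITEM: i) = weight[i] * 2 + 1;
% the derived values determine the size of an array
int: maxSlots = max(i in ITEM)(slots(i));
array[ITEM, 1..maxSlots] of var bool: used;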
Now, when such functions are used extensively in the model, I have observed significant flattening overhead. Sometimes that overhead can be eliminated by the following trick:
function int: myDerivedData(myEnum: x) = myDerivedDataCache[x];
array[myEnum] of int: myDerivedDataCache = [some complex function of x | x in myEnum];
The idea is to nudge the flattener into creating myDerivedDataCache only once, so that occurrences of myDerivedData() become really cheap. The problem is that the trick doesn't always work; oftentimes the flattener behaves as if it does no such caching.
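With the toy names from the sketch above, a runnable form of the trick would look like this; array1d is used only so that the comprehension's 1..n index set lines up with the enum index set:

% cached variant of the derived-data function
function int: slotsCached(ITEM: i) = slotsCache[i];
array[ITEM] of int: slotsCache =
  array1d(ITEM, [ weight[i] * 2 + 1 | i in ITEM ]);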
My question: is there some annotation that one can add to the array definition that will force the flattener to cache?
Or some other way of triggering caching?
I tried :: output, which had no observable effect. I have a case where the flattening time is a real show-stopper, apparently for this reason.
I think the real solution to this problem is to allow more tweaking of what is added to the CSE table. We already have an annotation ::no_cse that prevents certain functions/predicates from being added to the CSE table (and being cached). I think a "please CSE" type annotation might make sense for these situations; in this case it would override the fact that the CSE cache is normally not used because the expression is par.
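Purely as a syntax sketch: only ::no_cse exists today, and the ::cache name below is invented for illustration (it is not part of any MiniZinc release), but such an opt-in annotation could sit on the function item just like ::no_cse does:

ann: cache;  % declaring the hypothetical annotation so the sketch parses
% existing annotation: keeps a function out of the CSE table
function var int: sq(var int: x) ::no_cse = x * x;
% hypothetical opt-in counterpart: ask the flattener to cache a par
% function's results even though the expression is par
int: n;
array[1..n] of int: d;
function int: derived(int: i) ::cache = sum(j in 1..i)(d[j]);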
I believe the best way to ensure that the array is only computed once is to ensure it ends up in the FlatZinc. The most reliable way I know to do that is to add a ::int_search annotation (or similar) containing the data to the model. Because the data is fixed, it won't affect what the solver does, but I'm pretty sure such arrays always end up in the solver model.
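A sketch of that workaround, reusing the hypothetical slotsCache array from the earlier sketch (the input_order / indomain_min choices are arbitrary, since the array is fixed):

% attaching the fixed array to a search annotation keeps it in the FlatZinc;
% since every element is par, it cannot influence the actual search
solve :: int_search(slotsCache, input_order, indomain_min) satisfy;

In a model that already carries its own search annotation, the same int_search could be wrapped together with it in a seq_search.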
Dekker1 changed the title from "Flattening performance trick - help needed" to "Allow caching of par functions" on Aug 28, 2024.