When using fsyacc to parse small input strings, constructing the AssocTable is expensive relative to the parse itself. The cause is the hard-coded initial capacity (2000) of the Dictionary used as the lookup cache.
This seems to be a semi-known issue, as a comment in the code acknowledges:
// Note: using a .NET Dictionary for this int -> int table looks like it could be sub-optimal.
// Some other better sparse lookup table may be better.
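The trade-off the comment hints at: a pre-sized hash table pays construction cost proportional to its capacity, even when only a handful of entries are ever inserted. A toy sketch of this, in Python with hypothetical names (the real code is F# and uses System.Collections.Generic.Dictionary, whose capacity-taking constructor allocates its internal arrays up front):

```python
class IntIntTable:
    """Toy open-addressing int -> int table (hypothetical; for
    illustration only). Assumes non-negative keys so -1 can serve
    as the empty-slot sentinel."""
    EMPTY = -1

    def __init__(self, capacity=16):
        # Construction is O(capacity): both arrays are allocated and
        # filled eagerly. This is why a large fixed capacity such as
        # 2000 dominates the cost of parsing small inputs.
        self.keys = [self.EMPTY] * capacity
        self.vals = [0] * capacity
        self.count = 0

    def _slot(self, key):
        # Linear probing from the key's home slot.
        cap = len(self.keys)
        i = key % cap
        while self.keys[i] not in (self.EMPTY, key):
            i = (i + 1) % cap
        return i

    def put(self, key, value):
        # Keep the load factor at or below 1/2 by doubling.
        if 2 * (self.count + 1) > len(self.keys):
            self._grow()
        i = self._slot(key)
        if self.keys[i] == self.EMPTY:
            self.keys[i] = key
            self.count += 1
        self.vals[i] = value

    def get(self, key, default):
        i = self._slot(key)
        return self.vals[i] if self.keys[i] == key else default

    def _grow(self):
        # Rehash every live entry into arrays of twice the size.
        old = [(k, v) for k, v in zip(self.keys, self.vals)
               if k != self.EMPTY]
        self.keys = [self.EMPTY] * (2 * len(self.keys))
        self.vals = [0] * len(self.keys)
        self.count = 0
        for k, v in old:
            self.put(k, v)
```

Starting small and growing on demand keeps construction cheap for small inputs at the cost of occasional rehashing, which is amortized away for large ones.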
Repro steps
Profiling a parse of any small input shows that "AssocTable..ctor" accounts for a significant share of samples, with "Dictionary`2[System.Int32,System.Int32]..ctor" as the culprit. Recompiling with a smaller initial capacity (e.g. 20) noticeably improves performance for these inputs.
Expected behavior
I expect the table construction to either be cheaper by default, or for the initial capacity to be configurable in some way, so that parsing small inputs performs better.
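One possible shape for the fix, sketched in Python with hypothetical names (the capacity parameter and lazy allocation are suggestions, not the library's actual API): expose the cache capacity as a constructor argument and defer allocating the cache until the first lookup, so tiny parses never pay for it at all.

```python
class AssocTableSketch:
    """Hypothetical fix sketch: configurable, lazily allocated cache."""

    def __init__(self, entries, cache_capacity=2000):
        self._entries = dict(entries)  # stands in for the packed tables
        self._cache_capacity = cache_capacity
        self._cache = None             # nothing allocated at construction

    def read(self, key, default=0):
        if self._cache is None:
            # In .NET this would be Dictionary<int,int>(cache_capacity);
            # plain Python dicts take no capacity, shown for shape only.
            self._cache = {}
        if key in self._cache:
            return self._cache[key]
        value = self._entries.get(key, default)
        self._cache[key] = value
        return value
```

A parse that only touches a few states would then allocate a correspondingly small cache, while long-running parses could still request the large one.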
Actual behavior
The initial capacity is hard-coded; there is currently no way to configure it.