Minimize semantic tokens returned for constants and method calls #2479
Conversation
Do you know of any other LSP servers that use this approach of providing only partial semantic highlighting? Just wondering if there are any downsides we may be missing.

I think that's the standard approach. I've never seen a language server that returns all available tokens. Typically, you only return what cannot be determined using TextMate grammars.
Worth also updating requests/semantic_highlighting.rb to explain the new approach?

Added an explanation.
Motivation
Another step for #2461
This PR minimizes the number of semantic tokens returned for constants and method calls.
After this PR, in a ~4k file where we used to find ~20k tokens, we now find about ~8k. In terms of round-trip response time for semantic highlighting, on my machine it drops from 12 seconds to 4 seconds.
While it doesn't solve the problem completely for these super large files, the performance will be significantly better across the board.
Implementation
Constant references are never ambiguous, so I stopped returning tokens for all of them.
For method calls, only one syntax is ambiguous: calls with no receiver and no parentheses. I stopped returning tokens for the cases where they are not needed.
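The filtering rule above can be sketched as a small predicate. This is a hypothetical illustration, not the actual Ruby LSP code: `needs_semantic_token?` and its parameters are made up for the example. The idea is that a bare identifier like `foo` could be either a local variable read or a receiver-less method call, which a TextMate grammar cannot distinguish, whereas `foo()` and `obj.foo` are syntactically unambiguous (and constants always start with an uppercase letter, so the grammar handles them on its own).

```ruby
# Hypothetical sketch of the filtering rule (names are illustrative only):
# a method call needs a semantic token only when it is written with no
# receiver and no parentheses, e.g. `foo` -- the one form a TextMate
# grammar cannot tell apart from a local variable read.
def needs_semantic_token?(receiver:, parenthesized:)
  receiver.nil? && !parenthesized
end

needs_semantic_token?(receiver: nil, parenthesized: false)   # `foo`     => true
needs_semantic_token?(receiver: nil, parenthesized: true)    # `foo()`   => false
needs_semantic_token?(receiver: "obj", parenthesized: false) # `obj.foo` => false
```

Everything the predicate rejects can be left to the editor's static grammar, which is where the large reduction in token count comes from.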
Only the first commit has implementation. The rest is just updating expectations to reflect the reduced number of tokens.
Automated Tests
Updated existing ones. Removed some that no longer made sense.