
Add initial Semantic token #133

Merged: 6 commits into jeapostrophe:master on Sep 16, 2024

Conversation

@6cdh (Contributor) commented on Sep 9, 2024

Supported tokens: (variable function string number regexp)
Supported modifiers: (definition)
Supported methods: textDocument/semanticTokens/range and textDocument/semanticTokens/full.
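
As a hedged illustration of what those methods return (this is not the PR's code), LSP semantic tokens go over the wire as a flat array of five integers per token, with the line and start character delta-encoded against the previous token. A minimal Racket sketch of that encoding:

```racket
#lang racket

;; Illustrative sketch of the LSP semantic token wire encoding, not this
;; PR's implementation. Each input token is (line char length type-index
;; modifier-bits); the response is a flat list of five integers per token,
;; where line is a delta from the previous token's line and char is a
;; delta only when the token stays on the same line.
(define (encode-tokens tokens)
  (let loop ([prev-line 0] [prev-char 0] [ts tokens] [acc '()])
    (match ts
      ['() (reverse acc)]
      [(cons (list line char len type mods) rest)
       (define delta-line (- line prev-line))
       (define delta-char (if (zero? delta-line) (- char prev-char) char))
       (loop line char rest
             (append (list mods type len delta-char delta-line) acc))])))

;; Example: three tokens, the last on a new line.
;; (encode-tokens '((0 2 5 0 1) (0 10 3 1 0) (2 4 6 0 0)))
;; => '(0 2 5 0 1  0 8 3 1 0  2 4 6 0 0)
```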

There are some difficulties in supporting tokens like comment, struct, class, etc.

I don't know how to implement an embedded sexp comment reader, and struct, class, etc. require a fully functional type inferencer.

Image:

[Screenshot 2024-09-10 010323]

Image with a semantic highlight plugin that gives each variable a different color:

[Screenshot 2024-09-10 010355]

Review threads on doc.rkt, struct.rkt, and tests/lifecycle/init_resp.json (resolved).
@6cdh (Contributor, Author) commented on Sep 14, 2024

This PR has a serious problem: it causes latency and makes other requests slower to respond, because it does not run concurrently.

@6cdh (Contributor, Author) commented on Sep 14, 2024

I have no idea how to make it run concurrently. That method needs to access doc-text, which is an lsp-editor% instance, basically a text buffer implementation. But later requests might modify that text buffer, so we need to lock it, and that leaves no room to run the semantic token request concurrently.
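
A minimal sketch of the contention described here, with stub helpers standing in for lsp-editor% access (hypothetical names, not the PR's code):

```racket
#lang racket

;; Hypothetical sketch of the problem, not the PR's code; the helpers
;; are stubs standing in for lsp-editor% access.
(define buffer-lock (make-semaphore 1))

(define (doc-text->string doc) doc)        ; stub: real code reads the editor
(define (compute-tokens text) (list text)) ; stub: real tokenizer

;; The whole computation runs while holding buffer-lock, so didChange,
;; hover, and every other buffer-touching request waits for it to finish.
(define (semantic-tokens doc)
  (call-with-semaphore
   buffer-lock
   (λ () (compute-tokens (doc-text->string doc)))))
```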

@dannypsnl (Contributor) commented:

How about letting other tasks interrupt "semantic tokens"?

@dannypsnl mentioned this pull request on Sep 14, 2024
@6cdh (Contributor, Author) commented on Sep 14, 2024

It needs to access the text buffer during the whole process.

It obtains the text contained in the text buffer, calculates the tokens, then uses the text buffer class methods to convert the token positions from character offsets into line/character positions.

If it got interrupted in the middle of calculating tokens, the text buffer might get updated and produce unpredictable results.
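
For illustration only, a hypothetical standalone version of that conversion step (the real server uses lsp-editor% methods, which is why this step also depends on the possibly-mutated buffer):

```racket
#lang racket

;; Hypothetical helper, not the PR's code: turn a flat character offset
;; into an LSP line/character pair using the document text itself.
(define (offset->line/char text offset)
  (define prefix (substring text 0 offset))
  (define line
    (for/sum ([c (in-string prefix)]) (if (char=? c #\newline) 1 0)))
  (define last-nl
    (for/last ([i (in-range (string-length prefix))]
               #:when (char=? (string-ref prefix i) #\newline))
      i))
  (values line (if last-nl (- offset last-nl 1) offset)))

;; (offset->line/char "ab\ncd" 4) => (values 1 1)
```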

@6cdh (Contributor, Author) commented on Sep 14, 2024

After stating the whole process above, I found we can use line/character style positions directly.

There is another problem: if people type quickly, it can produce multiple requests. I guess we can always process only one request, and let a new request replace the old in-flight one, like how we did for the traversal that sends diagnostics.
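
A hedged sketch of that latest-request-wins idea, with hypothetical names (a later comment explains why this design was eventually dropped for semantic tokens):

```racket
#lang racket

;; Sketch of "a new request replaces the old in-flight one" with plain
;; Racket threads; synchronization around the mutable handle is omitted
;; for brevity, and this is not the PR's code.
(define current-worker #f)

(define (submit-request! work)
  (when (and current-worker (thread-running? current-worker))
    (kill-thread current-worker))
  (set! current-worker (thread work)))
```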

@6cdh (Contributor, Author) commented on Sep 14, 2024

It now simply copies the text buffer and uses the copy during processing.

It has usable latency now, although the fan is running loudly.
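
A minimal sketch of that snapshot approach, again with stub helpers standing in for lsp-editor% access: hold the lock only long enough to copy the text, then tokenize the copy.

```racket
#lang racket

;; Sketch of the snapshot approach, not the PR's code: the lock is held
;; only while copying the buffer text; tokenization then runs on the
;; immutable copy, so later edits cannot race with the computation.
(define buffer-lock (make-semaphore 1))

(define (doc-text->string doc) doc)        ; stub: real code reads lsp-editor%
(define (compute-tokens text) (list text)) ; stub: real tokenizer

(define (semantic-tokens doc)
  (define snapshot
    (call-with-semaphore buffer-lock (λ () (doc-text->string doc))))
  (compute-tokens snapshot))
```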

@6cdh (Contributor, Author) commented on Sep 15, 2024

The design where a new incoming task replaces the old running task can't apply here, because the client can reuse the results of old requests. For example, after inserting some whitespace, the client can calculate the offset and keep the old content highlighted. But if the interrupted task returned an empty result, the highlighting would be lost. The interrupted task also can't just give up without returning, because the client might wait on that request forever and cause a resource leak.

I think this PR is complete now.

@jeapostrophe (Owner) commented:

Thank you!

@jeapostrophe merged commit 5f432f1 into jeapostrophe:master on Sep 16, 2024. 5 checks passed.