
SimpleTokenizer Special function does not account for string length correctly #2

Open
arturo-myota opened this issue Dec 14, 2021 · 1 comment

Comments

@arturo-myota

In lexer.go, at line 164, the simple lexer can have a token reformatted (or otherwise transformed) by a special function. The Special function returns the actual token to be used.

At line 185, the buffer is advanced by the length of the token returned by the Special function. It should instead be advanced by the length of the consumed input.

Maybe Special should also return the length it consumed?

@arnodel
Owner

arnodel commented Dec 14, 2021

Yes, you are correct. IIRC, in my use cases the special function was returning the consumed input, so it still behaved OK. Your suggestion seems sensible.
