Experiment: Use native php tests #956
base: main
Conversation
Fixes a parse error when passing multiple argument spreads, such as `foo(...$bar, ...$baz)`. Also adds extra checks for function definitions where a variadic parameter is followed by other parameters, which PHP disallows. Fixes #946
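For illustration, a minimal sketch of the two constructs involved; `foo` and `qux` are placeholder names, not functions from the codebase:

```php
<?php

// Valid since PHP 5.6: multiple argument spreads in a single call.
// This is the construct that previously triggered a parse error.
foo(...$bar, ...$baz);

// Invalid: PHP requires the variadic parameter to be the last one.
// The added checks flag definitions like this instead of accepting them.
function qux(...$rest, $after) {}
```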
Fix error on variadic function calls.
chore: upgrade dependencies
Hi @MaartenStaa
@MaartenStaa In the prettier PHP plugin, we're parsing plain PHP files for snapshot testing using Jest setupFiles, thereby avoiding the code generation step: https://github.com/prettier/plugin-php/blob/main/tests_config/run_spec.js
@MaartenStaa This is really cool. Should we create a test blacklist for the failing tests, so we can work towards merging this and then work on fixing all the cases where it fails?
@czosel I had an idea the other day. A decent number of the bug reports on this project are due to small correctness issues. So I thought, is there a way to get ahead of the reports, and find them ourselves?
Well, the PHP project itself has many tests (.phpt files), so I set up this experiment: I pulled in the php-src repository as a submodule and wrote a script that generates Jest unit tests from the PHP tests, marking which ones are expected to fail and which are not. This branch and PR are the result.

As you can see in the commits, I've already tackled several correctness issues, but there are still plenty of failing tests. I'm jumping the gun and opening the PR to get early feedback on the idea and approach.
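For context, a .phpt test wraps a PHP snippet between section markers and pairs it with its expected output. A minimal sketch of the format (the contents here are an assumption for illustration, not an actual php-src test):

```phpt
--TEST--
Argument unpacking with multiple spreads
--FILE--
<?php
function sum(...$numbers) {
    return array_sum($numbers);
}
$a = [1, 2];
$b = [3, 4];
var_dump(sum(...$a, ...$b));
--EXPECT--
int(10)
```

The generator script then turns files like this into Jest cases, presumably running the parser over the --FILE-- section and recording whether the parse is expected to succeed.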
List of still failing tests:
Note: it's possible that some of these are beyond the scope of this project, or are cases where the test generator gets the expectation wrong, either expecting a test to fail when it shouldn't, or not marking a test as expected to fail when it should.