ts-jest should be replaced by Babel #521
Sorry, I'll have to disagree that it "should" be replaced. That's a very strong sentiment, and Babel has a bunch of caveats, from not type-checking, to being semantically different, to not respecting tsconfig. You can customize your Jest config if you want different behavior. I'll also disagree with your characterization of that issue as "long-standing": it was responded to immediately, and you were told that what you were seeking is impossible. You didn't respond with further clarifications, didn't provide a repro there or here, and there have been no up-votes in 1.5 years. In any case, this sounds more like an upstream issue than one for this repo.
I don't think a reproduction is required, since the issue is clear: making breaking changes to code that has more than a few dependents floods the Jest output with type errors, and it's not possible to filter out the errors and focus on just part of the code. The workaround is to disable the diagnostics option.

Tooling and its defaults should guide users towards a sensible workflow, and it's unclear what the proposed sensible workflow would be in this case. A sensible workflow would let the developer turn their focus to a specific part of the code, make changes to the API, then make the tests work (or vice versa), and only then update other parts and fix details like unused imports. That's what makes Jest a nice test runner: it supports working iteratively with short feedback cycles.

There are projects that use babel-jest and dwarf most others in scale, so that's not an issue. There are caveats, but they're offset by the benefits, and not checking types in the test runner isn't a caveat.

Regarding whether it's a long-standing issue: I reported it, found a workaround, and was told that the issue is inherent and so won't be fixed, so there was no reason to pursue it further.
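For readers looking for the workaround being referred to: ts-jest exposes a documented diagnostics option. A minimal sketch, assuming a ts-jest version from this era, where the option lived under `globals` (newer ts-jest versions moved it into the `transform` entry instead):

```javascript
// jest.config.js -- sketch of silencing ts-jest's type diagnostics
module.exports = {
  preset: 'ts-jest',
  globals: {
    'ts-jest': {
      // false disables type-error reporting entirely;
      // { warnOnly: true } would keep the errors visible but non-fatal
      diagnostics: false,
    },
  },
};
```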
Both me and the person who responded in
Those are opinions. I've certainly missed type errors from my editor before. You seem to have been the only one to report this as an issue to ts-jest.
Also an opinion, but I've seen the pattern of having a
Refactoring via breaking changes, and your specific method of refactoring -- not everyone refactors the same way -- is not the most common use case. The most common use case is the one that should be optimized; more specific ones can be configured.
I didn't mention scale? Most JS ecosystem libraries also tend to be on the smaller side.
I'm not sure what benefits there are? What is and isn't a caveat and which is a better trade-off is an opinion. Well except that Babel explicitly calls those things caveats, so there's actually an official message on that...
This is semantics, but I interpret "long-standing issue" as a bug that hasn't been fixed in a long time or a controversial decision. This isn't either and you seem to be the only one to report that as an issue.
Sorry but these are not the same story. TSLint is deprecated and the TSLint team is helping the ESLint team. And ESLint has a dedicated TS parser that is separate from its Babel parser. It also lists caveats & trade-offs of using the Babel parser.
The pertinent question is what the intended workflow would be. Breaking changes are part of a normal development workflow: either the tests are made to match the intended result and then the code is updated to pass the tests, or vice versa. In either case, both tests and types serve to find issues early. I outlined what happens currently with ts-jest.

The point about scale was in response to the number of users: there are more popular adjacent projects that take a different tack, and having more users is a reason to take workflow into careful consideration.
Breaking changes shouldn't be made frequently, they are breaking after all.
And
Ok, then it sounds like
It sounds like what you can do is to temporarily turn off diagnostics or temporarily re-configure a
Would you like to show examples? That would be a good way to support any argument.
Again, not everyone has the same workflow as you. Again, community-driven projects shouldn't make big changes solely based off one person's opinion & workflow.
Sorry for being ambiguous; by "breaking changes" I mean any type errors, not just public API incompatibility. Even when talking about APIs, it's normal to create new features or new major versions, to prototype or experiment, or to change private APIs, and temporary type errors are normal in all of these cases. I can just restate the question of why it would be useful to not see which tests are failing when there are type errors. There are straightforwardly better workflows: seeing type errors in the editor or in a separate tsc process.

It's not about specific preferences; temporary type errors are part of the normal use case and shouldn't require workarounds like disabling diagnostics. If not, it should be clearly described how type errors in the test output would be advantageous over the alternative.

The most notable caveat of

I've also not called
Ok, but a lot of those errors would be relevant, wouldn't they? The tests might even be failing because of type errors
Personally, I've never used a separate
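As context for other readers, the "separate" type-checking process being discussed here is typically just the TypeScript compiler run in watch mode outside Jest; a minimal sketch:

```shell
# Type-check the whole project continuously without emitting any files;
# type errors surface in this terminal instead of in the test output.
npx tsc --noEmit --watch
```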
The type errors only appear in files that are imported, if I'm understanding correctly, so I wouldn't say that feature is made useless -- it runs on a subset. It sounds like you'd be okay with type-checking in tests if it were limited to only the code run by the tests, as opposed to the entirety of the file -- is that correct?
Well running a separate
Not sure what you mean by "not relevant". If
That's definitely a relevant workflow to consider given TSDX borrows some details from CRA. Are there any other examples that you think would be relevant to consider?
The tests aren't even running if there are type errors, and if they are running, you can't tell whether or not they're failing because of the type errors. I showed a screenshot of what it can look like.
This is not a workable approach for larger code bases, since editors just check the files that are open.
That's again not a workable approach for larger codebases: it's unnecessary for errors like unused values, it's unnecessarily manual, and it runs counter to an incremental workflow where changes are made step by step, like making tests work one by one.
The output filtering in Jest just doesn't work on the type errors. There is no subset.
Running
I can say from extensive experience, having used TS since it was new, that there's nothing special about the workflow of fixing type errors (or any errors) incrementally. The point isn't to be a slave to the tool but to have it make development easier. That's also why Parcel doesn't check types: during development it's normal to have errors, and they should be checked just when they need to be, in the CI server at the latest.
I meant that
Nominally, but CRA has a significantly better TS DX.
I figured I'd add my two cents since we're in more of a discussion phase. One of the things I find most attractive about TSDX is that it's somewhat strict by default. It's designed for building libraries and shared code, and when I'm working on those kinds of projects, I really do want the coding standards to be higher, so I'm going to want stricter and more difficult-to-circumvent rules as a result. I realize that might slow development slightly, because someone will have to comment out a variable they aren't using yet to get tests to pass, but that's honestly fine with me, and I doubt I'm in the minority there. Developers who want a less strict experience have other options.
This is in no way a suggestion to lower code standards; all code should still be type-checked, but it's not necessary to do it in the test runner, for the reasons mentioned before. Even in the very simplest case of fixing unused identifiers, it takes limited developer attention away to focus on a chore; attention that could instead be spent on improving code quality in a more meaningful way, because the unused identifiers would still break the build and not pass CI anyway. Cases other than unused identifiers can easily be either much more involved to fix or even impracticable, leaving the user either to develop without tests (bad) or having to fix the setup (also bad).

To reiterate, the suggestion isn't to stop the types from being checked, and the question to answer is why it should be the test runner that's checking the types, given the downsides.
Issue about ts-jest vs Babel; quote from repo owner:
Another user about errors:
Here's a CRA feature request about type checking in tests where ts-jest's possible
The issue I linked to initially was from before Babel's support for TS, and it was just one that I was familiar with.
For what it's worth, unless one has a strong desire to type-check during tests, the main contributors of both

It seems like since we already have a solid compilation step, we shouldn't need to also typecheck our tests 🤷♂

I also figured that it'd be nice to not have Jest be a chain dep that relies on updates from
On ts-jest's status

So in the
And the repo owner also said in that issue:
Misc
The build step doesn't type-check your actual test files. The discussion here isn't on using
We're already past this, but for double posterity, I did say in the very same sentence that I use my editor and then use
Yes, for other readers, this has evolved into a discussion of where to type-check. Moving forward
This is also dated. It's notable that:
My other concern is that
Since there seems to be an interesting divide in schools of thought regarding type-checking in test files... I wonder if we can maintain backwards compatibility by letting people opt in to Babel for tests, but keeping the existing ts-jest behavior as the default.
I don't know that the divide is interesting, since the point about workflow hasn't been substantively addressed, aside from mentioning that some don't find it a problem.

The workflow issue is this: unit tests provide isolation, and this isolation allows focusing on just a part of the code. For example, if you make an incompatible change, the isolation allows you to fix the incompatibility test by test. Jest supports this workflow very well, since it has a number of ways to filter which tests are run. For example, you change a method to take different parameters, and then fix all the call sites separately instead of having to do it all at once. The test output becoming obscured by type errors breaks this workflow: if you make an incompatible change, you have to fix it everywhere before you can make use of the tests, and there's no workaround for it by design.

Edit: I've not used TSDX since, because my use cases often involve designing APIs, and it's just not viable with the
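For reference, the Jest filtering being referred to looks roughly like this (the file path and test name below are hypothetical examples):

```shell
npx jest src/api.test.ts          # run a single test file
npx jest -t "parses the config"   # run only tests whose names match
npx jest --onlyChanged            # run tests related to files changed in git
npx jest --watch                  # interactive watch mode with live filtering
```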
Hmmm. This is a very interesting debate. I find type-checking in Jest to be very, very useful; it is not always easy to identify a breaking type change with just a tsdx build. However, the current workflow has bit me in the ass a few times, especially when writing formik 2: not being able to isolate a single test at a time because of type errors was super annoying and resulted in larger, although more stable, commits. I can see how this would be exacerbated on large projects.

Regardless of the above, what @agilgur5 has mentioned about tsconfig not being respected and compilation possibly being different is a red flag. Before giving my 2 cents, I think it would be helpful for me to see a repo with babel-jest set up as discussed in the OP issue (without tsdx). It would be helpful to compare the workflows.
@jaredpalmer I also mentioned several pre-reqs that TSDX would need to support for this kind of workflow, so it's relatively far out to even consider.

@jaredpalmer @kylemh @slikts also, in my previous comments I've listed various workarounds that one can do within

Again, this discussion is now around what the default workflow should be. Other than configuring
That's an understatement for there being several groups of users in and out of TSDX in favor of the current behavior... #681 was around this and #735 is around combining even more outputs in different places.
Well summarized. I think we keep the default behavior then and document the workarounds explicitly |
Currently ts-jest is used for tests, and it has a long-standing issue that it doesn't support a sensible workflow for making changes: kulshekhar/ts-jest#798. If I'm making breaking changes to the code and want to focus on a specific test suite or case, there's no way to filter out the "diagnostics" from the other test cases, so the output can become flooded with messages.
The solution is to let Babel handle TS, since it doesn't flood the Jest output with type errors.
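A minimal sketch of what that setup could look like, per Jest's documented Babel support (this assumes @babel/preset-env and @babel/preset-typescript are installed; babel-jest, which ships with Jest, picks the config file up automatically):

```javascript
// babel.config.js -- let Babel strip TypeScript types for Jest;
// note: no type-checking happens here, types are simply removed
module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { node: 'current' } }],
    '@babel/preset-typescript',
  ],
};
```

Type errors would then be caught by the editor, a separate `tsc --noEmit` run, or CI, rather than appearing in the test output.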