Now using DISTINCT in ivoid-selecting registry subqueries. #572
Conversation
Force-pushed from cea3d07 to 7db0cfb
This is not only more logical (let's filter out duplicates as early as possible) but experimentally it is also enough to keep the planner from going off to follies of the type encountered in bug astropy#571.
Force-pushed from 7db0cfb to c290c03
I think no-changelog-entry is roughly right, although I have added the new PR number to the changelog entry about subqueries (that doesn't seem to placate the CI, though). Anyway, the current test failure appears to be unrelated to the change at hand (though I'd say we should do something about the sia2 tests in general, as they generally take ages). So, I think this is ready to go.
I was unsure about the changelog. But I guess it can be merged with the failure. I don't have time to review today, but I'll look tomorrow. These two successive PRs make me wonder how we could detect that a change makes everything slower. Would it be wise to benchmark?
On Tue, Jul 02, 2024 at 05:46:43AM -0700, Manon Marchand wrote:

> These two successive PRs make me wonder how we could detect that a change makes everything slower. Would it be wise to benchmark?

I'm afraid this is one of the many things we can't sensibly defend against with unit tests or something like them.

The problem here was, at its core, a specific fluke on one specific service. Ok, it's the default one we use, but still: if we try to even diagnose performance regressions, we will have useless test suites, because (a) they'll take forever to finish and (b) they'll only pass once in a blue moon, because the more services you hit, the more certain it becomes that at least one of them will be down or misbehaving at any time.

I'm still all for a wider array of tests, perhaps also human-executed tests (I'd be in for checking a notebook or two), to be executed before we do a release. That would have caught this problem, but frankly, that would mainly have been sheer luck, too.
Adding the PR number is certainly the preferred way to handle the changelog for this. Or calling it "affect-dev": when that label is present, the checker expects no changelog entry. The reason for the CI job to fail was the missing milestone, though.
That's temporary and is due to CADC experiencing some server-side issues.
Looks good to me, and I can confirm the speed improvement compared to main (and even main was speedier than what we had in the last release, so I cannot really benchmark this further). I think that the extra resources required by