[REVIEW]: PyLogGrid: A Python package for fluid dynamics on logarithmic lattices #6439
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper PDF after making changes in the paper's md or bib files, type: @editorialbot generate pdf
Software report:
Commit count by author:
Paper file info: 📄 Wordcount for ✅ The paper includes a
License info: 🟡 License found:
Review checklist for @marlonsmathias
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
@editorialbot add @weiwangstfc as reviewer
@weiwangstfc added to the reviewers list!
@philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue, or do I open a new one?
Hi @marlonsmathias, please create a new issue in the main repository at https://github.com/hippalectryon-0/pyloggrid. Thanks.
Review checklist for @slaizet
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
A few comments about this submission:
- Contribution and authorship
- Data sharing & reproducibility
- Performance
- Example usage
- Automated tests
- State of the field
@philipcardiff I'm not sure what the procedure is in JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end?
Hi @hippalectryon-0, the JOSS review process is (should be) interactive. You iteratively address review comments as they come up; there is no need to wait for the reviewers to finish their reviews. So, please address or respond to any issues that have been raised here or in the issues section of your repository. If anything is unclear, feel free to ask the reviewer to clarify (or for my interpretation). Once both reviewers state that they are happy that everything is in order (and they have completed their checklists), the review is complete.
Hi @hippalectryon-0, let me know if any of the reviewers' comments are unclear to you. |
Sorry for the delay, I've been busier than expected - I'll answer the comments shortly!
@marlonsmathias Thanks for your review!
@slaizet Thanks for your review! Regarding the points you have deemed lacking:
Best,
@slaizet Thank you for your answer.
Since the GitHub repository is a mirror of the private CEA GitLab repository, the direct contribution of each author is indeed not clearly visible. As indicated in the original submission, "The authors contributed to this work in unequal proportions, with Amaury Barral taking the lead in the majority of the research, while the remaining authors made valuable but comparatively minor contributions." If this is still not precise enough, let me know exactly what you would like to know.
What kind of performance data do you feel is lacking, precisely? I believe the page you mention describes all the existing tools for running your own benchmark on your machine. Performance can vary greatly depending on CPU architecture, compiler version, etc.
I agree with you on that point. The reason I did not provide, e.g., a Rayleigh-Bénard example is that such an example also requires a fair bit of mathematical explanation, for instance of why log-lattice equations differ from the usual RB equations (this is explained in our research paper), and I felt that overall it would be too convoluted for the documentation. Let me know if you really feel this is a blocking point for the acceptance of the paper.
The tests are 100% automated (see e.g. https://github.com/hippalectryon-0/pyloggrid/blob/joss/log-grid/.gitlab-ci.yml). They're performed on each commit and merge request. The test coverage is 85%, with all major parts 100% covered.
Shell-model simulations have indeed been performed in the past, but: 1) there is no widely used numerical library (that I know of) for shell models, as shell models are easy to re-implement from scratch for each paper; 2) although log-lattices share some similarities with shell models, they are also fundamentally different. Therefore I can't think of a relevant package to compare this one to.
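As an illustration of the on-machine benchmarking discussed above, a minimal timing sketch might look as follows. Note that `simulation_step` is a hypothetical stand-in workload for illustration only, not part of PyLogGrid's actual API:

```python
# Hedged sketch: timing a stand-in "solver step" with the standard library.
# `simulation_step` is hypothetical; substitute your own PyLogGrid run here.
import timeit
import numpy as np

def simulation_step(field):
    # Proxy workload: a 3D FFT, roughly representative of spectral solvers.
    return np.fft.fftn(field)

field = np.random.default_rng(0).random((32, 32, 32))
# Average over 10 calls to smooth out timer noise.
mean_time = timeit.timeit(lambda: simulation_step(field), number=10) / 10
print(f"mean step time: {mean_time:.4f} s")
```

Results from such a measurement depend heavily on hardware and library builds, which is why the documentation points users to benchmarking on their own machine rather than quoting fixed numbers.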
Thank you for your reply. I do not know if your answer regarding the contributions of the authors and having a single usage example of the software is satisfactory for the journal (@philipcardiff should be able to answer). In terms of performance, it would be great to see some scalability plots, memory usage, etc., on various hardware. Finally, I still think that a small literature review on shell-model simulations would be helpful for the readers and for potential users of your software.
Regarding authorship, the JOSS documentation states (note that authoring a commit is not a requirement):
@hippalectryon-0: based on your comments, I believe all five authors satisfy these requirements. Is this correct? Regarding the number of example cases, the JOSS documentation states:
Although possibly ambiguous, note the use of the word "examples" (plural). Also, the JOSS documentation states:
Based on these two points, it seems reasonable that more than one test case should be included in the documentation.
@editorialbot set 10.5281/zenodo.14356262 as archive
Done! archive is now 10.5281/zenodo.14356262 |
@editorialbot generate pdf |
@editorialbot check references |
Sure! https://github.com/hippalectryon-0/pyloggrid/releases/tag/2.3.2
@editorialbot set 2.3.2 as version |
Done! version is now 2.3.2 |
@editorialbot generate pdf |
Thanks, @hippalectryon-0. Can you update this line in the paper,
Done!
@editorialbot generate pdf |
@editorialbot generate pdf |
@kyleniemeyer: this submission is ready for processing. |
@editorialbot recommend-accept |
👋 @openjournals/pe-eics, this paper is ready to be accepted and published. Check final proof 👉📄 Download article. If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6250, then you can move forward with accepting the submission by compiling again with the command
@hippalectryon-0 it looks like the version on the Zenodo archive (v1) does not match the accepted version (v2.3.2). Can you correct the archive metadata? In addition, a number of proper names in the references are not capitalized properly, likely because they are missing the curly brackets that protect capitalization in the
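For context, the capitalization issue described above is standard BibTeX behavior: many bibliography styles lowercase title words unless they are wrapped in braces. A generic illustration (hypothetical entry, not taken from the paper's actual bibliography):

```bibtex
@article{example2024,
  title   = {Turbulence on log-lattices with {Python} and {Navier}-{Stokes} equations},
  author  = {Doe, Jane},
  journal = {Example Journal},
  year    = {2024},
}
```

The braces around {Python}, {Navier}, and {Stokes} tell BibTeX to keep those capitals regardless of the citation style in use.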
Thanks for the feedback, both points should be fixed.
However, the 2.3.2 release now points to the "old" version - as does the Zenodo archive (without the proper references). How should we resolve this?
@hippalectryon-0 looking at https://doi.org/10.5281/zenodo.14356262, it has the correct version number. I'm not sure what the problem is.
Changes to the JOSS paper itself do not need to be reflected in the software version release / archive. |
Alright! Let me know if anything else is missing. @kyleniemeyer
Submitting author: @hippalectryon-0 (Amaury Barral)
Repository: https://github.com/hippalectryon-0/pyloggrid
Branch with paper.md (empty if default branch): joss
Version: 2.3.2
Editor: @philipcardiff
Reviewers: @slaizet, @marlonsmathias, @weiwangstfc
Archive: 10.5281/zenodo.14356262
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@slaizet & @marlonsmathias, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @philipcardiff know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @marlonsmathias
📝 Checklist for @slaizet