[REVIEW]: Check your outliers! An introduction to identifying statistical outliers in R with *easystats* #221
Comments
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
@editorialbot generate my checklist
@stats-tgeorge - Not sure where to file this issue - but it seems that the
@editorialbot commands
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
Hello @nniiicc, here are the things you can ask me to do:
@editorialbot generate pdf
@editorialbot check repository
@editorialbot check repository
@editorialbot check repository from branch JOSE_paper
@editorialbot set JOSE_paper as branch
Done! branch is now JOSE_paper
Thanks for the clarification!
All fixed now. New doi: 10.5281/zenodo.8411224
@editorialbot check references
@editorialbot generate pdf
@editorialbot recommend-accept
👋 @openjournals/jose-eics, this paper is ready to be accepted and published. Check final proof 👉📄 Download article. If the paper PDF and the deposit XML files look good in openjournals/jose-papers#139, then you can now move forward with accepting the submission by compiling again with the command
Hello @openjournals/jose-eics I believe this one is ready. TY!
Hi, just wanted to check on the current status of this submission - is there anything we need to do or check before publication of the paper?
On a first browse of this submission, I am confused. The repository link points to the package performance.
I am looking through the history of this submission, and found the first review checklist, showing that this was submitted as a "learning module." (JOSE accepts two kinds of papers: those reporting on learning modules, and those reporting on educational software.) However, the associated repository that has been linked to this submission points to the software package performance. Do I take it that the authors have written their "learning module" in the paper itself? And there is no associated "learning module" to go along with this submission?
That is correct; the learning module is within the paper. The paper was in the JOSE branch via the repo link at the top, but in order for the authors to create a zenodo archive they had to reorganize. The repo for performance still has a folder with the JOSE paper in it.
Yikes. I'm afraid this breaks the JOSE model completely. We can't link the paper to the repository of the software—it is a scholarly output that was already published in JOSS. I apologize to the authors and reviewers that this was not flagged earlier in the process. The essential philosophy of JOSE is one that matches JOSS, where papers report on separate scholarly artifacts that are traditionally not rewarded with publication in the prevailing system. But in JOSE, the papers are about open source teaching materials that are intended to be reused or modified in the open source model, or about educational software. We do not publish papers that are themselves, in their narrative, intended to be the lesson material. This should be clear in the documentation, but if it is not, please help us see where the confusion arises.
We have the full educational module (i.e., an extended version of the paper) as a vignette: https://easystats.github.io/performance/articles/check_outliers.html It includes a statement about reuse of the material and instructions on how to contribute to the educational module. The repo of the vignette is still in the
I'm not really following how the vignette for this paper living within a much larger software repo contradicts the mission of JOSE to support open source educational materials. Nothing about the current outliers paper was previously published or recognized by JOSS. If it would be preferred, we can move the outliers vignette to its own repository.
@labarba I think there is some confusion about the review process we've already been through and the vignettes that @rempsyc points to above... TL;DR - please give advice on the issues below
See #221 (comment) - this issue was raised, and @stats-tgeorge consulted an EiC. There seem to be three issues:
@bwiernik's suggestion to move the vignettes to a new repo should solve both issues 1 and 2
If 3 is really an issue for JOSE editors - I recommend, for the sake of goodwill and for seeing @rempsyc's hard work rewarded, that we give some very clear directions on how to extend the vignettes into an appropriate OER submission, and how he should revise the actual JOSE paper accordingly. I realize that giving this kind of feedback (in Issue 3) might not be what an EiC commonly does w/r/t a JOSE submission - but I want to advocate strongly on the lead author's behalf here. He wrote out these modules seeking credit for the educational work he is doing with a statistical package he helped develop, for a postdoc application he is applying to... The form or structure of the submission aside, this scenario is exactly what I believe JOSE was designed to address. If we can't offer an early-career computational scientist a venue to document their open educational statistical work - that IMHO breaks the JOSE model completely
Again, I apologize to the authors and reviewers for the confusion. I take full responsibility for this as EiC. As a way of explanation, when a new submission comes in, I inspect it with the goal of finding the best fit of handling editor. This is often complicated by having to balance the load across all editors. Given my own workload, this inspection cannot be deep, so I missed that: (1) the linked repo was to an already published software artifact; (2) the learning material was contained in the paper itself. The issues above should have been raised in the review process, however. It looks like they were not flagged then, either. I acknowledge that the handling editor (@stats-tgeorge) did message me in Slack about this submission, and I now have found that he wrote "Their learning content is within the JOSE paper" within a longer message—but again I missed this. As a side note: you will notice that the editorial bot crawls the repository, and reports statistics. We use these to inform an assessment of the scale of scholarly work. In this case, they were reporting on the full software repo, and thus gave an incorrect signal of the work being submitted for review here. From the comment above, I see that you have crafted a "vignette," which is written in R markdown, hosted in your software repository, and can be read online. THAT should have been the material reviewed, and about which the paper (briefly) reports. This seems to be another boo-boo in the review process here, namely, the paper is too long (editorialbot reports 3428 words). JOSE papers should be short and contain specific sections. From the documentation:
In sum, the educational content the paper is about should exist in an open repository, while the paper contains only specific items of description. When assessing the scholarly contribution of the learning content, one criterion is the length, as measured by the editorialbot stats on the repo. Therefore, it is helpful if the repo is not bloated by extraneous materials. One solution can be to make a branch where you remove all other material and leave the vignette only, and we use that branch of the repo to link to the JOSE submission. The Zenodo archive should also be of the learning content as hosted in its source repository. Note that the JOSE publishing process makes a Crossref deposit where the DOI of the paper includes the DOI of the archive as a linked artifact via the metadata. The paper itself will have to be cut down to just the requested sections, with the learning content itself residing solely in the vignette. I do have additional concerns about the length of the vignette, and point you to the documentation, where we state:
JOSE has published papers on shorter learning modules, and the editors have discussed several times our varying opinions on what constitutes substantial scholarly contribution. We don't all agree, and we accept that length by itself is an imperfect criterion. Our determination was that shorter lessons can merit publication when they are supported by other aspects of scholarly contribution, e.g., if the lesson has been taught and has been assessed in the classroom. I'll stay tuned here for your reactions or comments and commit to a constructive approach to resolving the muddle.
@nniiicc — please note that I did not see your post until after mine. I have been crafting my response for the past hour with this page open while focusing on the "Write" tab of my own comment.
Hey all, First of all I want to apologize and take some of the blame for being uninformed. This is my second review as a coordinating editor and thus I don't know all of the JOSE fine details yet. I also misunderstood some of the documentation. I do want to encourage the authors to add to the vignette and restructure as the EiC mentioned, so we can review the new versions and get this published with JOSE. Let me know of additional questions and when you are ready to have new items reviewed!
Dear reviewers and editors, First, I would like to thank you all for contributing your reflections and viewpoints on this matter. We have decided to withdraw this publication from JOSE to submit to a different journal. Ultimately, we realize that the format we were envisioning (i.e., the paper itself being the resource) is not compatible with the JOSE model (i.e., the paper simply linking to the resource), as we want the paper to be cited for its best practice recommendations in itself like a regular paper (people would probably not cite a short paper describing a vignette on good practices). This is due to a misunderstanding on our part; we missed that the paper itself could not contain educational material. We recognize that our desired outcome would break the JOSE model, so we believe it is a better outcome for both us authors and JOSE to proceed with the paper withdrawal. I would like to reassure the reviewers and editors that their work has not been in vain, since it has contributed to making the paper (and associated resources) stronger. We thank you for that. Because of our paper withdrawal, I think this issue can be closed. Thank you for your understanding, and I will let you know if this gets published elsewhere :)
Thanks for the thoughtful decision and your kind words. I will proceed to withdraw the submission, and wish you the best of luck with your next steps.
@editorialbot withdraw
Paper withdrawn.
Dear editors and reviewers, I have the wonderful news that this week the paper was accepted in Behavior Research Methods. This has been a long ride and we have faced several journal rejections along the way, but it has been worth it. On behalf of the team, I would like to thank everyone involved in this constructive process that contributed to making the paper and related materials stronger. Thank you 🙏
Submitting author: @rempsyc (Rémi Thériault)
Repository: https://github.com/easystats/performance
Branch with paper.md (empty if default branch): main
Version: v0.10.6 [branch = JOSE_paper]
Editor: @stats-tgeorge
Reviewers: @nniiicc, @lebebr01
Archive: 10.5281/zenodo.8411009
Paper kind: learning module
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@nniiicc & @lebebr01, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:
@editorialbot generate my checklist
The reviewer guidelines are available here: https://openjournals.readthedocs.io/en/jose/reviewer_guidelines.html. If you have any questions or concerns, please let @stats-tgeorge know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @nniiicc
📝 Checklist for @lebebr01