[High Scores]: Improve Test Cases / Exercise for Mutability Discussions #2786
Hi @12rambau 👋🏽 Thanks for filing this issue. I think you bring up some valid points, some of them addressable, and some of them not addressable. Bear with me, there is a bit to unpack here:
This is "by design". For Practice exercises, we give very minimal stubs. The 'ethos' behind this is that it is "test driven development". You should be looking at the tests to get a feel for what is expected.
You are most likely seeing a difference between concept exercises (the ones linked from the main "nodes" of the site syllabus tree) and practice exercises (everything else). Concept exercises are meant to zero in on one set of closely related Python language concepts. They have fairly particular solutions, and very detailed instructions and stub files. The test files for these exercises are hidden on the site, and are not intended to be referred to or read by the student prior to solving the problem -- hence the detailed stubs and task sets.

Practice exercises are much looser, and focus more on a problem that practices multiple techniques in the language and/or algorithms or design decisions. For those problems, implementation is less directed, problem descriptions are less detailed (and are mostly shared across tracks), and the test files are intended to be looked at as part of the problem-solving process. Due to these differences in intent/process, we have not added detailed stubs or type hinting to practice exercise files, and are not likely to do so. We don't want to over-specify implementation, and we expect that students will (mostly) engage with mentors to talk through different design decisions and approaches.
I agree with you. As this exercise is currently implemented in Python (functions only, no class), those two functions don't serve much purpose. Python track tests for practice exercises are auto-generated based on specifications from the problem-specifications repo. Descriptions and data for these exercises are shared across tracks, and so are declarative and rather non-specific. Tracks can (and do) deviate -- but they have to do so through instruction and test appends -- OR -- the proposed change has to be accepted by 3 maintainers in problem-specifications and rolled out cross-track. So this isn't a matter of "simply" editing the test files or test cases -- for Python, we will need to edit:
So I am not against changing things. In fact, I think this exercise might be better in its original version, where we asked students to make a class with various methods to track scores. So I see several "paths" to addressing this issue:
Happy to discuss all of this further. Also happy to have you PR the work. But at this time, my vote would be for options 1 or 2 for expediency, and then a longer discussion about re-working the exercise. Hope that all makes sense -- and thank you for reading all this!
Wow, that's a complex and very specific process. But that's great! Thanks for enlightening me on the difference between concept exercises and practice exercises -- that's much clearer now, and I think forcing us to do some TDD is a good thing. On my issue, since the process requires a lot more work than I expected, I'll see how much time I can dedicate to it, but I'll be glad to see what the workflow is for participating in Exercism. I'll keep you updated this weekend.
Just to be clear: this complexity applies to amending or changing an existing practice exercise or its tests. Feel free to hit us up (@J08K and I are the maintainers for Python) if you are looking for other, less involved ways of contributing.
@12rambau -- Just checking in here. 😄 I went ahead and backed out the mutability test cases and regenerated the test files in PR #2808. This way, there is no need for effectively "empty" or "useless" functions in the exercise. But I am going to leave this issue open in case you'd like to PR additional test cases for Python that would get at discussing mutability issues, or if you would like to work on converting this exercise to one that uses a class. Just let me or @J08K know -- we'd be happy to help you with test case appends, or any other questions you have.
I think this exercise could be super useful for mutability; unfortunately, I'm swamped in webinars for work and the preparation is so time-consuming... Thanks a lot for your PR, and I agree we should leave this open -- at some point I'll have time to breathe and work on a PR for this exercise!
@12rambau -- just a note here. Mutability has come up in another exercise discussion - this one around
@IsaacG -- pasting my last comment from #3008 below:
Elsewhere. Maybe we do it in 2786, since that's the last filed issue against High Scores, and does discuss mutability. That way, the author of that issue will also get a ping letting him know you've picked it up - in case he wants to join in the discussion. I think (for now) we close this issue -- and also the other
That's what the original class-based High Scores did, and you can see that in the
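Roughly, that class-based shape looks like the following sketch -- the method names here are assumed from the current function-based stub, not copied from the actual 2018/2019 file or the draft PR:

```python
class HighScores:
    """Hold a player's scores and answer questions about them."""

    def __init__(self, scores):
        # Store the list the caller handed in; tests can later check that
        # calling the methods below does not reorder or alter it.
        self.scores = scores

    def latest(self):
        return self.scores[-1]

    def personal_best(self):
        return max(self.scores)

    def personal_top_three(self):
        # sorted() builds a new list, so self.scores stays untouched.
        return sorted(self.scores, reverse=True)[:3]
```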
This may be a case where simply re-instituting the exercise as it was in 2018/2019 will do the trick, and then altering the test generation template to import the class instead of functions, and changing the

But I think we should also take a look at the point you were making with
@IsaacG - The resurrected files from High Scores are here in a draft PR. As noted, there is still work around getting the immutability cases marked true and modifying the Jinja2 template to support them. It should be fairly straightforward. After the template is modified, the test case file will need to be regenerated. Let me know if you have questions or issues. Since I did have to do some work to make the Jinja2 template work, I'd prefer to get credit for that via the PR -- so I don't know if we want to merge and then have you do the modifications separately, or if you modify the PR directly by branching or forking.
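To illustrate the shape of that template change (this is a sketch only, not the actual output of the track's Jinja2 template), the generated tests would go from importing free functions to importing and instantiating the class:

```python
import unittest

# A function-based template imports free functions, e.g.:
#   from high_scores import latest, personal_best, personal_top_three
# A class-based template would instead emit something like:
from high_scores import HighScores


class HighScoresTest(unittest.TestCase):
    def test_personal_best(self):
        # Each generated case builds the object and calls a method on it,
        # rather than passing the scores list to a free function.
        self.assertEqual(HighScores([40, 100, 70]).personal_best(), 100)


if __name__ == "__main__":
    unittest.main()
```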
Here is the canonical data for the exercise.
Thoughts?
I remain skeptical that this particular concept can be taught through tests. It really feels like it needs to be a discussion with a mentor who can answer questions and highlight nuance. Unless a student is given good "whys" around this, it becomes an arbitrary and cargo-cult thing.
That is true. But this exercise (right now) is fairly early in the progression, and there is no intuitive reason for the student to expect that there be any sort of central "state" or "score" list (since the data is being passed in with each test). Because there is no intuitive reason to look out for mutability issues (a scores constant, a global stash, an object attribute, etc.), the immutability requirement is arbitrary. Why would a test validate object identity and look for mutated data? A general rule? But how does that help me with Python? So it might be a better prompt to push the student into having a class that holds the scores as state.
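As a concrete illustration -- these names and values are assumptions for the sake of example, not cases from the canonical data -- a test that only makes sense once there is stored scores state might look like:

```python
import unittest

from high_scores import HighScores  # assumes the class-based form sketched above


class ScoresAreNotMutatedTest(unittest.TestCase):
    def test_personal_top_three_leaves_scores_alone(self):
        highscores = HighScores([20, 10, 30])
        highscores.personal_top_three()
        # Asking for the top three must not re-sort the stored scores.
        self.assertEqual(highscores.scores, [20, 10, 30])

    def test_latest_still_correct_after_top_three(self):
        highscores = HighScores([20, 70, 15, 25, 30])
        highscores.personal_top_three()
        # latest() should still be the last score entered, which it will not be
        # if personal_top_three() sorted the stored list in place.
        self.assertEqual(highscores.latest(), 30)


if __name__ == "__main__":
    unittest.main()
```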
More or less what is there in the draft PR, with logic added to support some of the immutability test cases from
Either works. I do think we need a few tests to make sure that the student is nudged toward having a stored scores attribute.
I just did this exercise in PHP and Python, and I think the Python version could use some improvement. I would like to discuss whether that makes sense and eventually contribute via a PR.
Explanations
When you jump into the exercise, the explanations are very sparse. The user faces 4 functions, and without looking at the tests I would never have understood what the last ones were about (https://github.com/exercism/python/blob/main/exercises/practice/high-scores/high_scores.py).

In some exercises there is a complete description of the tasks, and in others each function's objective is described in its docstring (or both). Would it make sense to add this for this exercise?
latest_after_top_three and scores_after_top_three

I think these 2 functions are perfectly useless in Python. I actually validated the tests (python/exercises/practice/high-scores/high_scores_test.py, line 50 in 79515ff). From what I did in PHP, I think these functions reflect the C-like behavior where passing scores by reference in the latest or personal_top_three function could have altered the initial scores list. So instead of having these functions, I think it would make more sense to test the scores variable after running personal_top_three.

Let me know what you think and, more importantly, let me know if I'm wrong.
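Something along these lines is what I have in mind -- illustrative values only, with the function name taken from the current stub:

```python
import unittest

from high_scores import personal_top_three


class ScoresUnchangedTest(unittest.TestCase):
    def test_scores_list_is_not_altered_by_personal_top_three(self):
        scores = [20, 10, 30]
        personal_top_three(scores)
        # Instead of a separate scores_after_top_three function, just check
        # that the call left the input list exactly as it was.
        self.assertEqual(scores, [20, 10, 30])


if __name__ == "__main__":
    unittest.main()
```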
PS: As mentioned at the top of the issue, I'm happy to contribute.