Set up tests and GitHub Actions workflow #19
@Lathrisk is going to take a look at this in a couple of weeks, and I can help out/code review etc. I don't think the sphinx base involves any testing, so we'll need to write/borrow from other standards.
To map this out in a little more detail, here is a suggested starting list of tests to add as GitHub Actions (to be run on each push and pull request):

Generic tests

Schema-specific tests

Others, please do feel free to add/comment. @Lathrisk in terms of timing, I think we can make a start on the generic tests in the first block of time, then, depending on how far we get, finish these off in block two and get going with the schema-specific tests?
Looks good to me.
We should definitely have this; we don't want to merge PRs that break the docs build.
+1. OCDS has/had some extra tests that we can consider:
If you run […]. To check links, you can also run […]. We don't use […].
For Markdown formatting, I recommend setting up pre-commit with mdformat.
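As a sketch, a minimal `.pre-commit-config.yaml` using mdformat might look like the following (the `rev` shown is a placeholder; pin it to a current release):

```yaml
repos:
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.17  # placeholder; pin to the latest release
    hooks:
      - id: mdformat
```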
Thanks @jpmckinney!
Another test that would be useful is to check that the descriptions of related properties and codes are in sync:
Tests for example data are mentioned in #57 (comment)
Some Python functionality is now in ofdskit; in that case, remove the Python from here.
We're still going to have a pre-commit Python script for updating derivative schema files and reference documentation, so I think we want to have Python code checking as part of the tests.
Yes: if there are any large bits of Python in the repository, we still want code checking. I think my comment has been misunderstood. It was posted during our call and refers not to removing Python checking but to removing actual bits of Python code, if that code is now available in external libraries like ofdskit instead (and of course ofdskit does have Python checking!)
No, this repository is going to have a pre-commit hook to do that. If it's suitable for that code to become a generic Python library in another repository that the hook can just call, we'll do that. Suitable in this case means: the advantages of code-sharing across standards and/or different places outweigh the disadvantages of dealing with an extra repository. (I'm not saying that is in scope for this ticket, just trying to be clear about the option here that we should consider later!)
Yes, that all makes sense. I've created new issues.
with nit-picky mode and turn warnings into errors #19
I have started looking at this and the draft PR, taking functionality a bit at a time to new PRs to ensure things are merged as soon as they are ready.
You can use https://pre-commit.ci, which adds a check (which you can make mandatory or not).
#19 All JSON files were treated with this Python:

```python
import json
from collections import OrderedDict

files = [
    'schema/network-schema.json',
    'examples/json/network-package-additional-checks.json',
    'examples/json/network-package.json',
    'examples/json/spans-endpoint.json',
    'examples/json/nodes-endpoint.json',
    'examples/json/network-separate-files.json',
    'examples/json/multiple-networks.json',
    'examples/json/api-response.json',
    'examples/json/network-embedded.json',
    'examples/json/network-package-invalid.json',
    'examples/json/network-separate-endpoints.json',
]

for f in files:
    print(f)
    with open(f) as fp:
        data = json.load(fp, object_pairs_hook=OrderedDict)
    with open(f, "w") as fp:
        json.dump(data, fp, ensure_ascii=False, indent=2)
        fp.write('\n')
```
I think everything here has been done or moved to a new issue. Can we close this one?
Yep. Thanks!
I don't know what is provided for in https://github.com/OpenDataServices/sphinx-base, but we should set up tests and a GitHub Actions workflow to run them.
We can likely reuse some of the tests from OCDS, which are split across:
If I remember correctly, the reason for the separation is that some tests need to be run across multiple repos (extensions and profiles). For this project, I think we can put all the tests in this repository.
There's a lot of complexity in the OCDS tests relating to extensions, which doesn't need to be reproduced for this project.
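To give a concrete starting point, a minimal workflow sketch for building the docs on each push and pull request might look like the following. The file path, docs directory, and requirements file are assumptions about this repository's layout; `-n` enables Sphinx's nit-picky mode and `-W` turns warnings into errors, matching the commit referenced in this thread:

```yaml
# Hypothetical .github/workflows/ci.yml; adjust paths to the real repository layout.
name: CI
on: [push, pull_request]
jobs:
  build-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      # -n: nit-picky mode; -W: treat warnings as errors
      - run: sphinx-build -n -W -b html docs _build/html
```

Further jobs (schema checks, link checking, Python linting) could be added alongside `build-docs` as they are written.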